00:00:00.000 Started by upstream project "autotest-per-patch" build number 132686
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:08.388 The recommended git tool is: git
00:00:08.388 using credential 00000000-0000-0000-0000-000000000002
00:00:08.390 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:08.400 Fetching changes from the remote Git repository
00:00:08.402 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:08.412 Using shallow fetch with depth 1
00:00:08.412 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:08.412 > git --version # timeout=10
00:00:08.422 > git --version # 'git version 2.39.2'
00:00:08.422 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:08.433 Setting http proxy: proxy-dmz.intel.com:911
00:00:08.433 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:14.010 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:14.022 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:14.035 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:14.035 > git config core.sparsecheckout # timeout=10
00:00:14.048 > git read-tree -mu HEAD # timeout=10
00:00:14.068 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:14.096 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:14.096 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:14.179 [Pipeline] Start of Pipeline
00:00:14.191 [Pipeline] library
00:00:14.192 Loading library shm_lib@master
00:00:14.193 Library shm_lib@master is cached. Copying from home.
00:00:14.206 [Pipeline] node
00:00:14.212 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:14.214 [Pipeline] {
00:00:14.223 [Pipeline] catchError
00:00:14.224 [Pipeline] {
00:00:14.235 [Pipeline] wrap
00:00:14.243 [Pipeline] {
00:00:14.248 [Pipeline] stage
00:00:14.250 [Pipeline] { (Prologue)
00:00:14.267 [Pipeline] echo
00:00:14.268 Node: VM-host-WFP1
00:00:14.275 [Pipeline] cleanWs
00:00:14.285 [WS-CLEANUP] Deleting project workspace...
00:00:14.285 [WS-CLEANUP] Deferred wipeout is used...
00:00:14.290 [WS-CLEANUP] done
00:00:14.497 [Pipeline] setCustomBuildProperty
00:00:14.624 [Pipeline] httpRequest
00:00:14.983 [Pipeline] echo
00:00:14.984 Sorcerer 10.211.164.20 is alive
00:00:14.993 [Pipeline] retry
00:00:14.995 [Pipeline] {
00:00:15.008 [Pipeline] httpRequest
00:00:15.013 HttpMethod: GET
00:00:15.014 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.014 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.019 Response Code: HTTP/1.1 200 OK
00:00:15.019 Success: Status code 200 is in the accepted range: 200,404
00:00:15.020 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:44.264 [Pipeline] }
00:00:44.281 [Pipeline] // retry
00:00:44.289 [Pipeline] sh
00:00:44.579 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:44.596 [Pipeline] httpRequest
00:00:44.963 [Pipeline] echo
00:00:44.965 Sorcerer 10.211.164.20 is alive
00:00:44.976 [Pipeline] retry
00:00:44.978 [Pipeline] {
00:00:44.993 [Pipeline] httpRequest
00:00:44.998 HttpMethod: GET
00:00:44.999 URL: http://10.211.164.20/packages/spdk_3a4e432ea01f1b98044450ef74d9aa7683626399.tar.gz
00:00:44.999 Sending request to url: http://10.211.164.20/packages/spdk_3a4e432ea01f1b98044450ef74d9aa7683626399.tar.gz
00:00:45.004 Response Code: HTTP/1.1 200 OK
00:00:45.005 Success: Status code 200 is in the accepted range: 200,404
00:00:45.005 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_3a4e432ea01f1b98044450ef74d9aa7683626399.tar.gz
00:05:47.300 [Pipeline] }
00:05:47.317 [Pipeline] // retry
00:05:47.324 [Pipeline] sh
00:05:47.604 + tar --no-same-owner -xf spdk_3a4e432ea01f1b98044450ef74d9aa7683626399.tar.gz
00:05:50.147 [Pipeline] sh
00:05:50.428 + git -C spdk log --oneline -n5
00:05:50.428 3a4e432ea test/nvmf: Drop $RDMA_IP_LIST
00:05:50.428 688351e0e test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:05:50.428 2826724c4 test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:05:50.428 94ae61614 test/nvmf: Prepare replacements for the network setup
00:05:50.428 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:05:50.447 [Pipeline] writeFile
00:05:50.462 [Pipeline] sh
00:05:50.744 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:05:50.756 [Pipeline] sh
00:05:51.036 + cat autorun-spdk.conf
00:05:51.036 SPDK_RUN_FUNCTIONAL_TEST=1
00:05:51.036 SPDK_TEST_NVMF=1
00:05:51.036 SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:51.036 SPDK_TEST_URING=1
00:05:51.036 SPDK_TEST_USDT=1
00:05:51.036 SPDK_RUN_UBSAN=1
00:05:51.036 NET_TYPE=virt
00:05:51.036 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:51.042 RUN_NIGHTLY=0
00:05:51.044 [Pipeline] }
00:05:51.058 [Pipeline] // stage
00:05:51.071 [Pipeline] stage
00:05:51.073 [Pipeline] { (Run VM)
00:05:51.088 [Pipeline] sh
00:05:51.373 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:05:51.373 + echo 'Start stage prepare_nvme.sh'
00:05:51.373 Start stage prepare_nvme.sh
00:05:51.373 + [[ -n 7 ]]
00:05:51.373 + disk_prefix=ex7
00:05:51.373 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:05:51.373 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:05:51.373 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:05:51.373 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:05:51.373 ++ SPDK_TEST_NVMF=1
00:05:51.374 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:05:51.374 ++ SPDK_TEST_URING=1
00:05:51.374 ++ SPDK_TEST_USDT=1
00:05:51.374 ++ SPDK_RUN_UBSAN=1
00:05:51.374 ++ NET_TYPE=virt
00:05:51.374 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:05:51.374 ++ RUN_NIGHTLY=0
00:05:51.374 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:05:51.374 + nvme_files=()
00:05:51.374 + declare -A nvme_files
00:05:51.374 + backend_dir=/var/lib/libvirt/images/backends
00:05:51.374 + nvme_files['nvme.img']=5G
00:05:51.374 + nvme_files['nvme-cmb.img']=5G
00:05:51.374 + nvme_files['nvme-multi0.img']=4G
00:05:51.374 + nvme_files['nvme-multi1.img']=4G
00:05:51.374 + nvme_files['nvme-multi2.img']=4G
00:05:51.374 + nvme_files['nvme-openstack.img']=8G
00:05:51.374 + nvme_files['nvme-zns.img']=5G
00:05:51.374 + (( SPDK_TEST_NVME_PMR == 1 ))
00:05:51.374 + (( SPDK_TEST_FTL == 1 ))
00:05:51.374 + (( SPDK_TEST_NVME_FDP == 1 ))
00:05:51.374 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:05:51.374 + for nvme in "${!nvme_files[@]}"
00:05:51.374 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:05:51.374 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:05:51.374 + for nvme in "${!nvme_files[@]}"
00:05:51.374 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:05:51.374 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:05:51.374 + for nvme in "${!nvme_files[@]}"
00:05:51.374 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:05:51.374 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:05:51.374 + for nvme in "${!nvme_files[@]}"
00:05:51.374 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:05:51.374 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:05:51.374 + for nvme in "${!nvme_files[@]}"
00:05:51.374 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:05:51.374 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:05:51.633 + for nvme in "${!nvme_files[@]}"
00:05:51.633 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:05:51.633 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:05:51.633 + for nvme in "${!nvme_files[@]}"
00:05:51.633 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:05:51.633 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:05:51.633 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:05:51.633 + echo 'End stage prepare_nvme.sh'
00:05:51.633 End stage prepare_nvme.sh
00:05:51.644 [Pipeline] sh
00:05:51.926 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:05:51.926 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:05:51.926 
00:05:51.926 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:05:51.926 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:05:51.926 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:05:51.926 HELP=0
00:05:51.926 DRY_RUN=0
00:05:51.926 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:05:51.926 NVME_DISKS_TYPE=nvme,nvme,
00:05:51.926 NVME_AUTO_CREATE=0
00:05:51.926 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:05:51.927 NVME_CMB=,,
00:05:51.927 NVME_PMR=,,
00:05:51.927 NVME_ZNS=,,
00:05:51.927 NVME_MS=,,
00:05:51.927 NVME_FDP=,,
00:05:51.927 SPDK_VAGRANT_DISTRO=fedora39
00:05:51.927 SPDK_VAGRANT_VMCPU=10
00:05:51.927 SPDK_VAGRANT_VMRAM=12288
00:05:51.927 SPDK_VAGRANT_PROVIDER=libvirt
00:05:51.927 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:05:51.927 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:05:51.927 SPDK_OPENSTACK_NETWORK=0
00:05:51.927 VAGRANT_PACKAGE_BOX=0
00:05:51.927 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:05:51.927 FORCE_DISTRO=true
00:05:51.927 VAGRANT_BOX_VERSION=
00:05:51.927 EXTRA_VAGRANTFILES=
00:05:51.927 NIC_MODEL=e1000
00:05:51.927 
00:05:51.927 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:05:51.927 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:05:54.476 Bringing machine 'default' up with 'libvirt' provider...
00:05:55.856 ==> default: Creating image (snapshot of base box volume).
00:05:55.856 ==> default: Creating domain with the following settings...
00:05:55.856 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733395761_d98cb55b58826b03b974
00:05:55.856 ==> default: -- Domain type: kvm
00:05:55.856 ==> default: -- Cpus: 10
00:05:55.856 ==> default: -- Feature: acpi
00:05:55.856 ==> default: -- Feature: apic
00:05:55.856 ==> default: -- Feature: pae
00:05:55.856 ==> default: -- Memory: 12288M
00:05:55.856 ==> default: -- Memory Backing: hugepages:
00:05:55.856 ==> default: -- Management MAC:
00:05:55.856 ==> default: -- Loader:
00:05:55.856 ==> default: -- Nvram:
00:05:55.856 ==> default: -- Base box: spdk/fedora39
00:05:55.856 ==> default: -- Storage pool: default
00:05:55.856 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733395761_d98cb55b58826b03b974.img (20G)
00:05:55.856 ==> default: -- Volume Cache: default
00:05:55.856 ==> default: -- Kernel:
00:05:55.856 ==> default: -- Initrd:
00:05:55.856 ==> default: -- Graphics Type: vnc
00:05:55.856 ==> default: -- Graphics Port: -1
00:05:55.856 ==> default: -- Graphics IP: 127.0.0.1
00:05:55.856 ==> default: -- Graphics Password: Not defined
00:05:55.856 ==> default: -- Video Type: cirrus
00:05:55.856 ==> default: -- Video VRAM: 9216
00:05:55.856 ==> default: -- Sound Type:
00:05:55.856 ==> default: -- Keymap: en-us
00:05:55.856 ==> default: -- TPM Path:
00:05:55.856 ==> default: -- INPUT: type=mouse, bus=ps2
00:05:55.856 ==> default: -- Command line args:
00:05:55.856 ==> default: -> value=-device,
00:05:55.856 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:05:55.856 ==> default: -> value=-drive,
00:05:55.856 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:05:55.856 ==> default: -> value=-device,
00:05:55.856 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:55.856 ==> default: -> value=-device,
00:05:55.856 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:05:55.857 ==> default: -> value=-drive,
00:05:55.857 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:05:55.857 ==> default: -> value=-device,
00:05:55.857 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:55.857 ==> default: -> value=-drive,
00:05:55.857 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:05:55.857 ==> default: -> value=-device,
00:05:55.857 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:55.857 ==> default: -> value=-drive,
00:05:55.857 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:05:55.857 ==> default: -> value=-device,
00:05:55.857 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:05:56.431 ==> default: Creating shared folders metadata...
00:05:56.431 ==> default: Starting domain.
00:05:57.802 ==> default: Waiting for domain to get an IP address...
00:06:15.892 ==> default: Waiting for SSH to become available...
00:06:15.892 ==> default: Configuring and enabling network interfaces...
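Aside: the '-> value=' pairs in the domain settings above are the extra QEMU arguments Vagrant passes through to libvirt. Read in order, they pair each '-drive' backing file with an 'nvme-ns' namespace on one of two 'nvme' controllers. A minimal sketch of the equivalent direct invocation, assuming qemu-system-x86_64 on PATH and the backing images already created; the zoned/block-size namespace options and the second and third nvme-1 namespaces are trimmed here but shown in full in the log:

# Controller nvme-0 (serial 12340): one namespace, backed by ex7-nvme.img.
# Controller nvme-1 (serial 12341): first of three namespaces, backed by ex7-nvme-multi0.img.
# Later in the log, setup.sh status reports these as nvme0 (nvme0n1) and nvme1 (nvme1n1..n3).
qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1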
00:06:20.103 default: SSH address: 192.168.121.180:22
00:06:20.103 default: SSH username: vagrant
00:06:20.103 default: SSH auth method: private key
00:06:22.640 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:06:32.620 ==> default: Mounting SSHFS shared folder...
00:06:33.558 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:06:33.558 ==> default: Checking Mount..
00:06:35.517 ==> default: Folder Successfully Mounted!
00:06:35.517 ==> default: Running provisioner: file...
00:06:36.480 default: ~/.gitconfig => .gitconfig
00:06:37.046 
00:06:37.046 SUCCESS!
00:06:37.046 
00:06:37.046 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:06:37.046 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:06:37.046 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:06:37.046 
00:06:37.055 [Pipeline] }
00:06:37.071 [Pipeline] // stage
00:06:37.081 [Pipeline] dir
00:06:37.082 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:06:37.083 [Pipeline] {
00:06:37.099 [Pipeline] catchError
00:06:37.101 [Pipeline] {
00:06:37.114 [Pipeline] sh
00:06:37.395 + vagrant ssh-config --host vagrant
00:06:37.395 + sed -ne /^Host/,$p
00:06:37.395 + tee ssh_conf
00:06:40.680 Host vagrant
00:06:40.680 HostName 192.168.121.180
00:06:40.680 User vagrant
00:06:40.680 Port 22
00:06:40.680 UserKnownHostsFile /dev/null
00:06:40.680 StrictHostKeyChecking no
00:06:40.680 PasswordAuthentication no
00:06:40.680 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:06:40.680 IdentitiesOnly yes
00:06:40.680 LogLevel FATAL
00:06:40.680 ForwardAgent yes
00:06:40.680 ForwardX11 yes
00:06:40.680 
00:06:40.696 [Pipeline] withEnv
00:06:40.700 [Pipeline] {
00:06:40.716 [Pipeline] sh
00:06:40.994 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:06:40.994 source /etc/os-release
00:06:40.994 [[ -e /image.version ]] && img=$(< /image.version)
00:06:40.994 # Minimal, systemd-like check.
00:06:40.994 if [[ -e /.dockerenv ]]; then
00:06:40.994 # Clear garbage from the node's name:
00:06:40.994 # agt-er_autotest_547-896 -> autotest_547-896
00:06:40.994 # $HOSTNAME is the actual container id
00:06:40.994 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:06:40.994 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:06:40.994 # We can assume this is a mount from a host where container is running,
00:06:40.994 # so fetch its hostname to easily identify the target swarm worker.
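# In that case /etc/hostname is bind-mounted from the host, so reading it
# names the swarm worker itself; $HOSTNAME alone would only give the
# container ID. The result is "worker-hostname (container@agent)".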
00:06:40.994 container="$(< /etc/hostname) ($agent)"
00:06:40.994 else
00:06:40.994 # Fallback
00:06:40.994 container=$agent
00:06:40.994 fi
00:06:40.994 fi
00:06:40.994 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:06:40.994 
00:06:41.263 [Pipeline] }
00:06:41.282 [Pipeline] // withEnv
00:06:41.290 [Pipeline] setCustomBuildProperty
00:06:41.305 [Pipeline] stage
00:06:41.308 [Pipeline] { (Tests)
00:06:41.324 [Pipeline] sh
00:06:41.604 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:06:41.878 [Pipeline] sh
00:06:42.161 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:06:42.434 [Pipeline] timeout
00:06:42.434 Timeout set to expire in 1 hr 0 min
00:06:42.436 [Pipeline] {
00:06:42.454 [Pipeline] sh
00:06:42.744 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:06:43.311 HEAD is now at 3a4e432ea test/nvmf: Drop $RDMA_IP_LIST
00:06:43.323 [Pipeline] sh
00:06:43.604 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:06:43.876 [Pipeline] sh
00:06:44.156 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:06:44.428 [Pipeline] sh
00:06:44.709 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:06:44.968 ++ readlink -f spdk_repo
00:06:44.968 + DIR_ROOT=/home/vagrant/spdk_repo
00:06:44.968 + [[ -n /home/vagrant/spdk_repo ]]
00:06:44.968 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:06:44.968 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:06:44.968 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:06:44.968 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:06:44.968 + [[ -d /home/vagrant/spdk_repo/output ]]
00:06:44.968 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:06:44.968 + cd /home/vagrant/spdk_repo
00:06:44.968 + source /etc/os-release
00:06:44.968 ++ NAME='Fedora Linux'
00:06:44.968 ++ VERSION='39 (Cloud Edition)'
00:06:44.968 ++ ID=fedora
00:06:44.968 ++ VERSION_ID=39
00:06:44.968 ++ VERSION_CODENAME=
00:06:44.968 ++ PLATFORM_ID=platform:f39
00:06:44.968 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:44.968 ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:44.968 ++ LOGO=fedora-logo-icon
00:06:44.968 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:44.968 ++ HOME_URL=https://fedoraproject.org/
00:06:44.968 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:44.968 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:44.968 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:44.968 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:44.968 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:44.968 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:44.968 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:44.968 ++ SUPPORT_END=2024-11-12
00:06:44.968 ++ VARIANT='Cloud Edition'
00:06:44.968 ++ VARIANT_ID=cloud
00:06:44.968 + uname -a
00:06:44.968 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:06:44.968 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:45.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:45.593 Hugepages
00:06:45.593 node hugesize free / total
00:06:45.593 node0 1048576kB 0 / 0
00:06:45.593 node0 2048kB 0 / 0
00:06:45.593 
00:06:45.593 Type BDF Vendor Device NUMA Driver Device Block devices
00:06:45.593 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:06:45.593 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:06:45.594 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:06:45.594 + rm -f /tmp/spdk-ld-path
00:06:45.594 + source autorun-spdk.conf
00:06:45.594 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:45.594 ++ SPDK_TEST_NVMF=1
00:06:45.594 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:45.594 ++ SPDK_TEST_URING=1
00:06:45.594 ++ SPDK_TEST_USDT=1
00:06:45.594 ++ SPDK_RUN_UBSAN=1
00:06:45.594 ++ NET_TYPE=virt
00:06:45.594 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:06:45.594 ++ RUN_NIGHTLY=0
00:06:45.594 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:06:45.594 + [[ -n '' ]]
00:06:45.594 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:06:45.594 + for M in /var/spdk/build-*-manifest.txt
00:06:45.594 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:45.594 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.594 + for M in /var/spdk/build-*-manifest.txt
00:06:45.594 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:45.594 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.594 + for M in /var/spdk/build-*-manifest.txt
00:06:45.594 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:45.594 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:06:45.594 ++ uname
00:06:45.594 + [[ Linux == \L\i\n\u\x ]]
00:06:45.594 + sudo dmesg -T
00:06:45.855 + sudo dmesg --clear
00:06:45.855 + dmesg_pid=5207
00:06:45.855 + [[ Fedora Linux == FreeBSD ]]
00:06:45.855 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:45.855 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:45.855 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:45.855 + sudo dmesg -Tw
00:06:45.855 + [[ -x /usr/src/fio-static/fio ]]
00:06:45.855 + export FIO_BIN=/usr/src/fio-static/fio
00:06:45.855 + FIO_BIN=/usr/src/fio-static/fio
00:06:45.855 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:45.855 + [[ ! -v VFIO_QEMU_BIN ]]
00:06:45.855 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:45.855 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:45.855 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:45.855 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:45.855 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:45.855 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:45.855 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
10:50:12 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
10:50:12 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
10:50:12 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
10:50:12 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
10:50:12 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
10:50:12 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1
10:50:12 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1
10:50:12 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
10:50:12 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
10:50:12 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
10:50:12 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
10:50:12 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
10:50:12 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:06:46.121 10:50:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
10:50:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
10:50:13 -- scripts/common.sh@15 -- $ shopt -s extglob
10:50:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
10:50:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:50:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:50:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:50:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:50:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:50:13 -- paths/export.sh@5 -- $ export PATH
10:50:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:50:13 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
10:50:13 -- common/autobuild_common.sh@493 -- $ date +%s
10:50:13 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733395813.XXXXXX
10:50:13 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733395813.HrgwjJ
10:50:13 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
10:50:13 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
10:50:13 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
10:50:13 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
10:50:13 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
10:50:13 -- common/autobuild_common.sh@509 -- $ get_config_params
10:50:13 -- common/autotest_common.sh@409 -- $ xtrace_disable
10:50:13 -- common/autotest_common.sh@10 -- $ set +x
10:50:13 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring'
10:50:13 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
10:50:13 -- pm/common@17 -- $ local monitor
10:50:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:50:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:50:13 -- pm/common@25 -- $ sleep 1
10:50:13 -- pm/common@21 -- $ date +%s
10:50:13 -- pm/common@21 -- $ date +%s
10:50:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733395813
10:50:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733395813
00:06:46.121 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733395813_collect-cpu-load.pm.log
00:06:46.121 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733395813_collect-vmstat.pm.log
00:06:47.063 10:50:14 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
10:50:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:50:14 -- spdk/autobuild.sh@12 -- $ umask 022
10:50:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
10:50:14 -- spdk/autobuild.sh@16 -- $ date -u
00:06:47.063 Thu Dec 5 10:50:14 AM UTC 2024
10:50:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:47.063 v25.01-pre-300-g3a4e432ea
10:50:14 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
10:50:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
10:50:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
10:50:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
10:50:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
10:50:14 -- common/autotest_common.sh@10 -- $ set +x
00:06:47.063 ************************************
00:06:47.063 START TEST ubsan
00:06:47.063 ************************************
00:06:47.063 using ubsan
10:50:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:47.063 
00:06:47.063 real 0m0.001s
00:06:47.063 user 0m0.000s
00:06:47.063 sys 0m0.000s
10:50:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:47.063 ************************************
00:06:47.063 END TEST ubsan
00:06:47.064 ************************************
10:50:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
10:50:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
10:50:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
10:50:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
10:50:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
10:50:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
10:50:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
10:50:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
10:50:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
10:50:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
00:06:47.323 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:47.323 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:06:47.891 Using 'verbs' RDMA provider
00:07:04.170 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:07:22.330 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:07:22.330 Creating mk/config.mk...done.
00:07:22.330 Creating mk/cc.flags.mk...done.
00:07:22.330 Type 'make' to build.
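Aside: the run_test helper that wraps the ubsan check above and the make step below is what produces the START/END TEST banners and the real/user/sys timing lines in this log. A minimal sketch of the pattern, assuming simplified banner formatting; the actual helper lives in SPDK's test/common/autotest_common.sh and also records timing data for the final report:

run_test() {
    # First argument is the test name; the rest is the command to run.
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # time(1) emits the real/user/sys lines captured in the log.
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}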
00:07:22.330 10:50:47 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
10:50:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
10:50:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable
10:50:47 -- common/autotest_common.sh@10 -- $ set +x
00:07:22.330 ************************************
00:07:22.330 START TEST make
00:07:22.330 ************************************
10:50:47 make -- common/autotest_common.sh@1129 -- $ make -j10
00:07:22.330 make[1]: Nothing to be done for 'all'.
00:07:32.303 The Meson build system
00:07:32.303 Version: 1.5.0
00:07:32.303 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:07:32.303 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:07:32.303 Build type: native build
00:07:32.303 Program cat found: YES (/usr/bin/cat)
00:07:32.303 Project name: DPDK
00:07:32.303 Project version: 24.03.0
00:07:32.303 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:07:32.303 C linker for the host machine: cc ld.bfd 2.40-14
00:07:32.303 Host machine cpu family: x86_64
00:07:32.303 Host machine cpu: x86_64
00:07:32.303 Message: ## Building in Developer Mode ##
00:07:32.303 Program pkg-config found: YES (/usr/bin/pkg-config)
00:07:32.303 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:07:32.303 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:07:32.303 Program python3 found: YES (/usr/bin/python3)
00:07:32.303 Program cat found: YES (/usr/bin/cat)
00:07:32.303 Compiler for C supports arguments -march=native: YES
00:07:32.303 Checking for size of "void *" : 8
00:07:32.303 Checking for size of "void *" : 8 (cached)
00:07:32.303 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:07:32.303 Library m found: YES
00:07:32.303 Library numa found: YES
00:07:32.303 Has header "numaif.h" : YES
00:07:32.303 Library fdt found: NO
00:07:32.303 Library execinfo found: NO
00:07:32.303 Has header "execinfo.h" : YES
00:07:32.303 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:07:32.303 Run-time dependency libarchive found: NO (tried pkgconfig)
00:07:32.303 Run-time dependency libbsd found: NO (tried pkgconfig)
00:07:32.303 Run-time dependency jansson found: NO (tried pkgconfig)
00:07:32.303 Run-time dependency openssl found: YES 3.1.1
00:07:32.303 Run-time dependency libpcap found: YES 1.10.4
00:07:32.303 Has header "pcap.h" with dependency libpcap: YES
00:07:32.303 Compiler for C supports arguments -Wcast-qual: YES
00:07:32.303 Compiler for C supports arguments -Wdeprecated: YES
00:07:32.303 Compiler for C supports arguments -Wformat: YES
00:07:32.303 Compiler for C supports arguments -Wformat-nonliteral: NO
00:07:32.303 Compiler for C supports arguments -Wformat-security: NO
00:07:32.303 Compiler for C supports arguments -Wmissing-declarations: YES
00:07:32.303 Compiler for C supports arguments -Wmissing-prototypes: YES
00:07:32.303 Compiler for C supports arguments -Wnested-externs: YES
00:07:32.303 Compiler for C supports arguments -Wold-style-definition: YES
00:07:32.303 Compiler for C supports arguments -Wpointer-arith: YES
00:07:32.303 Compiler for C supports arguments -Wsign-compare: YES
00:07:32.303 Compiler for C supports arguments -Wstrict-prototypes: YES
00:07:32.303 Compiler for C supports arguments -Wundef: YES
00:07:32.303 Compiler for C supports arguments -Wwrite-strings: YES
00:07:32.303 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:07:32.303 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:07:32.303 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:07:32.303 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:07:32.303 Program objdump found: YES (/usr/bin/objdump)
00:07:32.303 Compiler for C supports arguments -mavx512f: YES
00:07:32.303 Checking if "AVX512 checking" compiles: YES
00:07:32.303 Fetching value of define "__SSE4_2__" : 1
00:07:32.303 Fetching value of define "__AES__" : 1
00:07:32.303 Fetching value of define "__AVX__" : 1
00:07:32.303 Fetching value of define "__AVX2__" : 1
00:07:32.303 Fetching value of define "__AVX512BW__" : 1
00:07:32.303 Fetching value of define "__AVX512CD__" : 1
00:07:32.303 Fetching value of define "__AVX512DQ__" : 1
00:07:32.303 Fetching value of define "__AVX512F__" : 1
00:07:32.303 Fetching value of define "__AVX512VL__" : 1
00:07:32.303 Fetching value of define "__PCLMUL__" : 1
00:07:32.303 Fetching value of define "__RDRND__" : 1
00:07:32.303 Fetching value of define "__RDSEED__" : 1
00:07:32.303 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:07:32.303 Fetching value of define "__znver1__" : (undefined)
00:07:32.303 Fetching value of define "__znver2__" : (undefined)
00:07:32.303 Fetching value of define "__znver3__" : (undefined)
00:07:32.303 Fetching value of define "__znver4__" : (undefined)
00:07:32.303 Compiler for C supports arguments -Wno-format-truncation: YES
00:07:32.303 Message: lib/log: Defining dependency "log"
00:07:32.303 Message: lib/kvargs: Defining dependency "kvargs"
00:07:32.303 Message: lib/telemetry: Defining dependency "telemetry"
00:07:32.303 Checking for function "getentropy" : NO
00:07:32.303 Message: lib/eal: Defining dependency "eal"
00:07:32.303 Message: lib/ring: Defining dependency "ring"
00:07:32.303 Message: lib/rcu: Defining dependency "rcu"
00:07:32.303 Message: lib/mempool: Defining dependency "mempool"
00:07:32.303 Message: lib/mbuf: Defining dependency "mbuf"
00:07:32.303 Fetching value of define "__PCLMUL__" : 1 (cached)
00:07:32.303 Fetching value of define "__AVX512F__" : 1 (cached)
00:07:32.303 Fetching value of define "__AVX512BW__" : 1 (cached)
00:07:32.303 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:07:32.303 Fetching value of define "__AVX512VL__" : 1 (cached)
00:07:32.303 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:07:32.303 Compiler for C supports arguments -mpclmul: YES
00:07:32.303 Compiler for C supports arguments -maes: YES
00:07:32.303 Compiler for C supports arguments -mavx512f: YES (cached)
00:07:32.303 Compiler for C supports arguments -mavx512bw: YES
00:07:32.303 Compiler for C supports arguments -mavx512dq: YES
00:07:32.303 Compiler for C supports arguments -mavx512vl: YES
00:07:32.303 Compiler for C supports arguments -mvpclmulqdq: YES
00:07:32.303 Compiler for C supports arguments -mavx2: YES
00:07:32.303 Compiler for C supports arguments -mavx: YES
00:07:32.304 Message: lib/net: Defining dependency "net"
00:07:32.304 Message: lib/meter: Defining dependency "meter"
00:07:32.304 Message: lib/ethdev: Defining dependency "ethdev"
00:07:32.304 Message: lib/pci: Defining dependency "pci"
00:07:32.304 Message: lib/cmdline: Defining dependency "cmdline"
00:07:32.304 Message: lib/hash: Defining dependency "hash"
00:07:32.304 Message: lib/timer: Defining dependency "timer"
00:07:32.304 Message: lib/compressdev: Defining dependency "compressdev"
00:07:32.304 Message: lib/cryptodev: Defining dependency "cryptodev"
00:07:32.304 Message: lib/dmadev: Defining dependency "dmadev"
00:07:32.304 Compiler for C supports arguments -Wno-cast-qual: YES
00:07:32.304 Message: lib/power: Defining dependency "power"
00:07:32.304 Message: lib/reorder: Defining dependency "reorder"
00:07:32.304 Message: lib/security: Defining dependency "security"
00:07:32.304 Has header "linux/userfaultfd.h" : YES
00:07:32.304 Has header "linux/vduse.h" : YES
00:07:32.304 Message: lib/vhost: Defining dependency "vhost"
00:07:32.304 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:07:32.304 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:07:32.304 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:07:32.304 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:07:32.304 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:07:32.304 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:07:32.304 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:07:32.304 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:07:32.304 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:07:32.304 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:07:32.304 Program doxygen found: YES (/usr/local/bin/doxygen)
00:07:32.304 Configuring doxy-api-html.conf using configuration
00:07:32.304 Configuring doxy-api-man.conf using configuration
00:07:32.304 Program mandb found: YES (/usr/bin/mandb)
00:07:32.304 Program sphinx-build found: NO
00:07:32.304 Configuring rte_build_config.h using configuration
00:07:32.304 Message:
00:07:32.304 =================
00:07:32.304 Applications Enabled
00:07:32.304 =================
00:07:32.304 
00:07:32.304 apps:
00:07:32.304 
00:07:32.304 
00:07:32.304 Message:
00:07:32.304 =================
00:07:32.304 Libraries Enabled
00:07:32.304 =================
00:07:32.304 
00:07:32.304 libs:
00:07:32.304 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:07:32.304 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:07:32.304 cryptodev, dmadev, power, reorder, security, vhost,
00:07:32.304 
00:07:32.304 Message:
00:07:32.304 ===============
00:07:32.304 Drivers Enabled
00:07:32.304 ===============
00:07:32.304 
00:07:32.304 common:
00:07:32.304 
00:07:32.304 bus:
00:07:32.304 pci, vdev,
00:07:32.304 mempool:
00:07:32.304 ring,
00:07:32.304 dma:
00:07:32.304 
00:07:32.304 net:
00:07:32.304 
00:07:32.304 crypto:
00:07:32.304 
00:07:32.304 compress:
00:07:32.304 
00:07:32.304 vdpa:
00:07:32.304 
00:07:32.304 
00:07:32.304 Message:
00:07:32.304 =================
00:07:32.304 Content Skipped
00:07:32.304 =================
00:07:32.304 
00:07:32.304 apps:
00:07:32.304 dumpcap: explicitly disabled via build config
00:07:32.304 graph: explicitly disabled via build config
00:07:32.304 pdump: explicitly disabled via build config
00:07:32.304 proc-info: explicitly disabled via build config
00:07:32.304 test-acl: explicitly disabled via build config
00:07:32.304 test-bbdev: explicitly disabled via build config
00:07:32.304 test-cmdline: explicitly disabled via build config
00:07:32.304 test-compress-perf: explicitly disabled via build config
00:07:32.304 test-crypto-perf: explicitly disabled via build config
00:07:32.304 test-dma-perf: explicitly disabled via build config
00:07:32.304 test-eventdev: explicitly disabled via build config
00:07:32.304 test-fib: explicitly disabled via build config
00:07:32.304 test-flow-perf: explicitly disabled via build config
00:07:32.304 test-gpudev: explicitly disabled via build config
00:07:32.304 test-mldev: explicitly disabled via build config
00:07:32.304 test-pipeline: explicitly disabled via build config
00:07:32.304 test-pmd: explicitly disabled via build config
00:07:32.304 test-regex: explicitly disabled via build config
00:07:32.304 test-sad: explicitly disabled via build config
00:07:32.304 test-security-perf: explicitly disabled via build config
00:07:32.304 
00:07:32.304 libs:
00:07:32.304 argparse: explicitly disabled via build config
00:07:32.304 metrics: explicitly disabled via build config
00:07:32.304 acl: explicitly disabled via build config
00:07:32.304 bbdev: explicitly disabled via build config
00:07:32.304 bitratestats: explicitly disabled via build config
00:07:32.304 bpf: explicitly disabled via build config
00:07:32.304 cfgfile: explicitly disabled via build config
00:07:32.304 distributor: explicitly disabled via build config
00:07:32.304 efd: explicitly disabled via build config
00:07:32.304 eventdev: explicitly disabled via build config
00:07:32.304 dispatcher: explicitly disabled via build config
00:07:32.304 gpudev: explicitly disabled via build config
00:07:32.304 gro: explicitly disabled via build config
00:07:32.304 gso: explicitly disabled via build config
00:07:32.304 ip_frag: explicitly disabled via build config
00:07:32.304 jobstats: explicitly disabled via build config
00:07:32.304 latencystats: explicitly disabled via build config
00:07:32.304 lpm: explicitly disabled via build config
00:07:32.304 member: explicitly disabled via build config
00:07:32.304 pcapng: explicitly disabled via build config
00:07:32.304 rawdev: explicitly disabled via build config
00:07:32.304 regexdev: explicitly disabled via build config
00:07:32.304 mldev: explicitly disabled via build config
00:07:32.304 rib: explicitly disabled via build config
00:07:32.304 sched: explicitly disabled via build config
00:07:32.304 stack: explicitly disabled via build config
00:07:32.304 ipsec: explicitly disabled via build config
00:07:32.304 pdcp: explicitly disabled via build config
00:07:32.304 fib: explicitly disabled via build config
00:07:32.304 port: explicitly disabled via build config
00:07:32.304 pdump: explicitly disabled via build config
00:07:32.304 table: explicitly disabled via build config
00:07:32.304 pipeline: explicitly disabled via build config
00:07:32.304 graph: explicitly disabled via build config
00:07:32.304 node: explicitly disabled via build config
00:07:32.304 
00:07:32.304 drivers:
00:07:32.304 common/cpt: not in enabled drivers build config
00:07:32.304 common/dpaax: not in enabled drivers build config
00:07:32.304 common/iavf: not in enabled drivers build config
00:07:32.304 common/idpf: not in enabled drivers build config
00:07:32.304 common/ionic: not in enabled drivers build config
00:07:32.304 common/mvep: not in enabled drivers build config
00:07:32.304 common/octeontx: not in enabled drivers build config
00:07:32.304 bus/auxiliary: not in enabled drivers build config
00:07:32.304 bus/cdx: not in enabled drivers build config
00:07:32.304 bus/dpaa: not in enabled drivers build config
00:07:32.304 bus/fslmc: not in enabled drivers build config
00:07:32.305 bus/ifpga: not in enabled drivers build config
00:07:32.305 bus/platform: not in enabled drivers build config
00:07:32.305 bus/uacce: not in enabled drivers build config
00:07:32.305 bus/vmbus: not in enabled drivers build config
00:07:32.305 common/cnxk: not in enabled drivers build config
00:07:32.305 common/mlx5: not in enabled drivers build config
00:07:32.305 common/nfp: not in enabled drivers build config
00:07:32.305 common/nitrox: not in enabled drivers build config
00:07:32.305 common/qat: not in enabled drivers build config
00:07:32.305 common/sfc_efx: not in enabled drivers build config
00:07:32.305 mempool/bucket: not in enabled drivers build config
00:07:32.305 mempool/cnxk: not in enabled drivers build config
00:07:32.305 mempool/dpaa: not in enabled drivers build config
00:07:32.305 mempool/dpaa2: not in enabled drivers build config
00:07:32.305 mempool/octeontx: not in enabled drivers build config
00:07:32.305 mempool/stack: not in enabled drivers build config
00:07:32.305 dma/cnxk: not in enabled drivers build config
00:07:32.305 dma/dpaa: not in enabled drivers build config
00:07:32.305 dma/dpaa2: not in enabled drivers build config
00:07:32.305 dma/hisilicon: not in enabled drivers build config
00:07:32.305 dma/idxd: not in enabled drivers build config
00:07:32.305 dma/ioat: not in enabled drivers build config
00:07:32.305 dma/skeleton: not in enabled drivers build config
00:07:32.305 net/af_packet: not in enabled drivers build config
00:07:32.305 net/af_xdp: not in enabled drivers build config
00:07:32.305 net/ark: not in enabled drivers build config
00:07:32.305 net/atlantic: not in enabled drivers build config
00:07:32.305 net/avp: not in enabled drivers build config
00:07:32.305 net/axgbe: not in enabled drivers build config
00:07:32.305 net/bnx2x: not in enabled drivers build config
00:07:32.305 net/bnxt: not in enabled drivers build config
00:07:32.305 net/bonding: not in enabled drivers build config
00:07:32.305 net/cnxk: not in enabled drivers build config
00:07:32.305 net/cpfl: not in enabled drivers build config
00:07:32.305 net/cxgbe: not in enabled drivers build config
00:07:32.305 net/dpaa: not in enabled drivers build config
00:07:32.305 net/dpaa2: not in enabled drivers build config
00:07:32.305 net/e1000: not in enabled drivers build config
00:07:32.305 net/ena: not in enabled drivers build config
00:07:32.305 net/enetc: not in enabled drivers build config
00:07:32.305 net/enetfec: not in enabled drivers build config
00:07:32.305 net/enic: not in enabled drivers build config
00:07:32.305 net/failsafe: not in enabled drivers build config
00:07:32.305 net/fm10k: not in enabled drivers build config
00:07:32.305 net/gve: not in enabled drivers build config
00:07:32.305 net/hinic: not in enabled drivers build config
00:07:32.305 net/hns3: not in enabled drivers build config
00:07:32.305 net/i40e: not in enabled drivers build config
00:07:32.305 net/iavf: not in enabled drivers build config
00:07:32.305 net/ice: not in enabled drivers build config
00:07:32.305 net/idpf: not in enabled drivers build config
00:07:32.305 net/igc: not in enabled drivers build config
00:07:32.305 net/ionic: not in enabled drivers build config
00:07:32.305 net/ipn3ke: not in enabled drivers build config
00:07:32.305 net/ixgbe: not in enabled drivers build config
00:07:32.305 net/mana: not in enabled drivers build config
00:07:32.305 net/memif: not in enabled drivers build config
00:07:32.305 net/mlx4: not in enabled drivers build config
00:07:32.305 net/mlx5: not in enabled drivers build config
00:07:32.305 net/mvneta: not in enabled drivers build config
00:07:32.305 net/mvpp2: not in enabled drivers build config
00:07:32.305 net/netvsc: not in enabled drivers build config
00:07:32.305 net/nfb: not in enabled drivers build config
00:07:32.305 net/nfp: not in enabled drivers build config
00:07:32.305 net/ngbe: not in enabled drivers build config
00:07:32.305 net/null: not in enabled drivers build config
00:07:32.305 net/octeontx: not in enabled drivers build config
00:07:32.305 net/octeon_ep: not in enabled drivers build config
00:07:32.305 net/pcap: not in enabled drivers build config
00:07:32.305 net/pfe: not in enabled drivers build config
00:07:32.305 net/qede: not in enabled drivers build config
00:07:32.305 net/ring: not in enabled drivers build config
00:07:32.305 net/sfc: not in enabled drivers build config
00:07:32.305 net/softnic: not in enabled drivers build config
00:07:32.305 net/tap: not in enabled drivers build config
00:07:32.305 net/thunderx: not in enabled drivers build config
00:07:32.305 net/txgbe: not in enabled drivers build config
00:07:32.305 net/vdev_netvsc: not in enabled drivers build config
00:07:32.305 net/vhost: not in enabled drivers build config
00:07:32.305 net/virtio: not in enabled drivers build config
00:07:32.305 net/vmxnet3: not in enabled drivers build config
00:07:32.305 raw/*: missing internal dependency, "rawdev"
00:07:32.305 crypto/armv8: not in enabled drivers build config
00:07:32.305 crypto/bcmfs: not in enabled drivers build config
00:07:32.305 crypto/caam_jr: not in enabled drivers build config
00:07:32.305 crypto/ccp: not in enabled drivers build config
00:07:32.305 crypto/cnxk: not in enabled drivers build config
00:07:32.305 crypto/dpaa_sec: not in enabled drivers build config
00:07:32.305 crypto/dpaa2_sec: not in enabled drivers build config
00:07:32.305 crypto/ipsec_mb: not in enabled drivers build config
00:07:32.305 crypto/mlx5: not in enabled drivers build config
00:07:32.305 crypto/mvsam: not in enabled drivers build config
00:07:32.305 crypto/nitrox: not in enabled drivers build config
00:07:32.305 crypto/null: not in enabled drivers build config
00:07:32.305 crypto/octeontx: not in enabled drivers build config
00:07:32.305 crypto/openssl: not in enabled drivers build config
00:07:32.305 crypto/scheduler: not in enabled drivers build config
00:07:32.305 crypto/uadk: not in enabled drivers build config
00:07:32.305 crypto/virtio: not in enabled drivers build config
00:07:32.305 compress/isal: not in enabled drivers build config
00:07:32.305 compress/mlx5: not in enabled drivers build config
00:07:32.305 compress/nitrox: not in enabled drivers build config
00:07:32.305 compress/octeontx: not in enabled drivers build config
00:07:32.305 compress/zlib: not in enabled drivers build config
00:07:32.305 regex/*: missing internal dependency, "regexdev"
00:07:32.305 ml/*: missing internal dependency, "mldev"
00:07:32.305 vdpa/ifc: not in enabled drivers build config
00:07:32.305 vdpa/mlx5: not in enabled drivers build config
00:07:32.305 vdpa/nfp: not in enabled drivers build config
00:07:32.305 vdpa/sfc: not in enabled drivers build config
00:07:32.305 event/*: missing internal dependency, "eventdev"
00:07:32.305 baseband/*: missing internal dependency, "bbdev"
00:07:32.305 gpu/*: missing internal dependency, "gpudev"
00:07:32.305 
00:07:32.305 
00:07:32.564 Build targets in project: 85
00:07:32.564 
00:07:32.564 DPDK 24.03.0
00:07:32.564 
00:07:32.564 User defined options
00:07:32.564 buildtype : debug
00:07:32.564 default_library : shared
00:07:32.564 libdir : lib
00:07:32.564 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:07:32.564 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:07:32.564 c_link_args : 
00:07:32.564 cpu_instruction_set: native
00:07:32.564 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:07:32.564 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:07:32.564 enable_docs : false
00:07:32.564 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:07:32.564 enable_kmods : false
00:07:32.564 max_lcores : 128
00:07:32.564 tests : false
00:07:32.564 
00:07:32.564 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:07:33.132 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:07:33.132 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:07:33.132 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:07:33.132 [3/268] Linking static target lib/librte_kvargs.a
00:07:33.132 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:07:33.391 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:07:33.391 [6/268] Linking static target lib/librte_log.a
00:07:33.649 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:07:33.649 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:07:33.649 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:07:33.649 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:07:33.649 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:07:33.649 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:07:33.907 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:07:33.907 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:07:33.907 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:07:33.907 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:07:33.907 [17/268] Linking static target lib/librte_telemetry.a
00:07:33.907 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:07:34.165 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:07:34.423 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:07:34.423 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:07:34.423 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:07:34.423 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:07:34.423 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:07:34.423 [25/268] Linking target lib/librte_log.so.24.1
00:07:34.423 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:07:34.423 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:07:34.423 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:07:34.681 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:07:34.681 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:07:34.681 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:07:34.940 [32/268] Linking target lib/librte_kvargs.so.24.1
00:07:34.940 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:07:34.940 [34/268] Linking target lib/librte_telemetry.so.24.1
00:07:34.940 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:07:34.940 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:07:34.940 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:07:35.199 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:07:35.199 [39/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:07:35.199 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:07:35.199 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:07:35.199 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:07:35.199 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:07:35.199 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:07:35.199 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:07:35.199 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:07:35.199 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:07:35.199 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:07:35.767 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:07:35.767 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:07:35.767 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:07:35.767 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:07:35.767 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:07:35.767 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:07:35.768 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:07:36.027 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:07:36.027 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:07:36.027 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:07:36.027 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:07:36.027 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:07:36.285 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:07:36.285 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:07:36.285 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:07:36.541 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:07:36.541 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:07:36.541 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:07:36.798 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:07:36.798 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:07:36.798 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:36.798 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:37.056 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:37.056 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:37.056 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:37.056 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:37.056 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:37.056 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:37.313 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:37.313 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:37.570 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:37.570 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:37.570 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:37.570 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:37.570 [83/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:37.570 [84/268] Linking static target lib/librte_rcu.a 00:07:37.827 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:37.827 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:37.827 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:37.827 [88/268] Linking static target lib/librte_ring.a 00:07:37.827 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:37.827 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:37.827 [91/268] Linking static target lib/librte_eal.a 00:07:38.085 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:38.085 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:38.085 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:38.085 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:38.341 [96/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.341 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:38.341 [98/268] Linking static target lib/librte_mbuf.a 00:07:38.341 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:38.341 [100/268] Linking static target lib/librte_mempool.a 00:07:38.341 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:38.341 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:38.341 [103/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.341 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:38.597 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:38.597 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:38.597 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:38.597 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:38.597 [109/268] Linking static target lib/librte_net.a 00:07:38.853 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:38.853 [111/268] Linking static target 
lib/librte_meter.a 00:07:38.853 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:38.853 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:39.110 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:39.110 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:39.110 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.367 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.624 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.624 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:39.624 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:39.624 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:39.624 [122/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:39.882 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:39.882 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:40.140 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:40.140 [126/268] Linking static target lib/librte_pci.a 00:07:40.140 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:40.140 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:40.140 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:40.398 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:40.398 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:40.398 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:40.398 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:40.398 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:40.398 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:40.398 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:40.398 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:40.398 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.656 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:40.656 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:40.656 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:40.656 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:40.656 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:40.656 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:40.656 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:40.656 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:40.914 [147/268] Linking static target lib/librte_cmdline.a 00:07:40.914 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:40.914 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:40.914 [150/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:40.914 [151/268] Linking static target lib/librte_ethdev.a 00:07:41.172 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:41.430 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:41.430 [154/268] Linking static target lib/librte_timer.a 00:07:41.430 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:41.430 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:41.430 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:41.688 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:41.688 [159/268] Linking static target lib/librte_hash.a 00:07:41.688 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:41.688 [161/268] Linking static target lib/librte_compressdev.a 00:07:41.947 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:41.947 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:41.947 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:41.947 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:42.206 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:42.206 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:42.206 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:42.206 [169/268] Linking static target lib/librte_dmadev.a 00:07:42.464 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:42.464 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:42.722 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:42.722 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:42.722 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:42.722 [175/268] Linking static target lib/librte_cryptodev.a 00:07:42.722 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.123 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.123 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:43.123 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:43.123 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.123 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:43.383 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:43.383 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:43.383 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:43.383 [185/268] Linking static target lib/librte_power.a 00:07:43.383 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:43.642 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:43.642 [188/268] Linking static target lib/librte_reorder.a 00:07:43.901 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:43.901 [190/268] Linking static target 
lib/librte_security.a 00:07:43.901 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:43.901 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:43.901 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:44.160 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:44.160 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.727 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.727 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:44.727 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:44.727 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:44.984 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:44.985 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:45.242 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:45.242 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:45.242 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:45.242 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:45.500 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.500 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:45.501 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:45.501 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:45.501 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:45.758 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:45.758 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:45.759 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:45.759 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:45.759 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:45.759 [216/268] Linking static target drivers/librte_bus_pci.a 00:07:45.759 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:45.759 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:46.016 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:46.016 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:46.016 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:46.016 [222/268] Linking static target drivers/librte_bus_vdev.a 00:07:46.016 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:46.016 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:46.016 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:46.016 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:46.275 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:46.275 [228/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:47.211 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:47.211 [230/268] Linking static target lib/librte_vhost.a 00:07:49.108 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:50.041 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:50.299 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:50.558 [234/268] Linking target lib/librte_eal.so.24.1 00:07:50.558 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:50.817 [236/268] Linking target lib/librte_pci.so.24.1 00:07:50.817 [237/268] Linking target lib/librte_ring.so.24.1 00:07:50.817 [238/268] Linking target lib/librte_meter.so.24.1 00:07:50.817 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:50.817 [240/268] Linking target lib/librte_dmadev.so.24.1 00:07:50.817 [241/268] Linking target lib/librte_timer.so.24.1 00:07:50.817 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:50.817 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:50.817 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:50.817 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:50.817 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:50.817 [247/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:51.075 [248/268] Linking target lib/librte_rcu.so.24.1 00:07:51.075 [249/268] Linking target lib/librte_mempool.so.24.1 00:07:51.075 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:51.075 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:51.334 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:51.334 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:51.334 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:51.334 [255/268] Linking target lib/librte_reorder.so.24.1 00:07:51.592 [256/268] Linking target lib/librte_net.so.24.1 00:07:51.592 [257/268] Linking target lib/librte_compressdev.so.24.1 00:07:51.592 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:07:51.592 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:51.592 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:51.850 [261/268] Linking target lib/librte_hash.so.24.1 00:07:51.850 [262/268] Linking target lib/librte_security.so.24.1 00:07:51.850 [263/268] Linking target lib/librte_cmdline.so.24.1 00:07:51.850 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:51.850 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:51.850 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:52.109 [267/268] Linking target lib/librte_power.so.24.1 00:07:52.109 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:52.109 INFO: autodetecting backend as ninja 00:07:52.109 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:18.690 CC lib/ut/ut.o 00:08:18.690 CC lib/log/log.o 
00:08:18.690 CC lib/log/log_flags.o 00:08:18.690 CC lib/log/log_deprecated.o 00:08:18.690 CC lib/ut_mock/mock.o 00:08:18.690 LIB libspdk_ut.a 00:08:18.690 LIB libspdk_log.a 00:08:18.690 SO libspdk_ut.so.2.0 00:08:18.690 LIB libspdk_ut_mock.a 00:08:18.690 SO libspdk_log.so.7.1 00:08:18.690 SO libspdk_ut_mock.so.6.0 00:08:18.690 SYMLINK libspdk_ut.so 00:08:18.690 SYMLINK libspdk_log.so 00:08:18.690 SYMLINK libspdk_ut_mock.so 00:08:18.690 CC lib/dma/dma.o 00:08:18.690 CXX lib/trace_parser/trace.o 00:08:18.690 CC lib/util/base64.o 00:08:18.690 CC lib/util/cpuset.o 00:08:18.690 CC lib/util/bit_array.o 00:08:18.690 CC lib/util/crc32.o 00:08:18.690 CC lib/util/crc32c.o 00:08:18.690 CC lib/util/crc16.o 00:08:18.690 CC lib/ioat/ioat.o 00:08:18.690 CC lib/vfio_user/host/vfio_user_pci.o 00:08:18.690 CC lib/util/crc32_ieee.o 00:08:18.690 CC lib/vfio_user/host/vfio_user.o 00:08:18.690 LIB libspdk_dma.a 00:08:18.690 CC lib/util/crc64.o 00:08:18.690 CC lib/util/dif.o 00:08:18.690 CC lib/util/fd.o 00:08:18.690 SO libspdk_dma.so.5.0 00:08:18.690 SYMLINK libspdk_dma.so 00:08:18.690 CC lib/util/fd_group.o 00:08:18.690 CC lib/util/file.o 00:08:18.690 CC lib/util/hexlify.o 00:08:18.690 CC lib/util/iov.o 00:08:18.690 LIB libspdk_ioat.a 00:08:18.690 CC lib/util/math.o 00:08:18.690 SO libspdk_ioat.so.7.0 00:08:18.690 LIB libspdk_vfio_user.a 00:08:18.690 CC lib/util/net.o 00:08:18.690 SYMLINK libspdk_ioat.so 00:08:18.690 CC lib/util/pipe.o 00:08:18.690 SO libspdk_vfio_user.so.5.0 00:08:18.690 CC lib/util/strerror_tls.o 00:08:18.690 SYMLINK libspdk_vfio_user.so 00:08:18.690 CC lib/util/string.o 00:08:18.690 CC lib/util/uuid.o 00:08:18.690 CC lib/util/xor.o 00:08:18.690 CC lib/util/zipf.o 00:08:18.690 CC lib/util/md5.o 00:08:18.690 LIB libspdk_util.a 00:08:18.690 SO libspdk_util.so.10.1 00:08:18.690 LIB libspdk_trace_parser.a 00:08:18.690 SO libspdk_trace_parser.so.6.0 00:08:18.690 SYMLINK libspdk_util.so 00:08:18.690 SYMLINK libspdk_trace_parser.so 00:08:18.690 CC lib/vmd/led.o 00:08:18.690 CC lib/vmd/vmd.o 00:08:18.690 CC lib/rdma_utils/rdma_utils.o 00:08:18.690 CC lib/idxd/idxd.o 00:08:18.690 CC lib/idxd/idxd_user.o 00:08:18.690 CC lib/idxd/idxd_kernel.o 00:08:18.690 CC lib/json/json_util.o 00:08:18.690 CC lib/json/json_parse.o 00:08:18.690 CC lib/env_dpdk/env.o 00:08:18.690 CC lib/conf/conf.o 00:08:18.690 CC lib/json/json_write.o 00:08:18.690 CC lib/env_dpdk/memory.o 00:08:18.690 CC lib/env_dpdk/pci.o 00:08:18.690 LIB libspdk_conf.a 00:08:18.690 CC lib/env_dpdk/init.o 00:08:18.690 CC lib/env_dpdk/threads.o 00:08:18.690 LIB libspdk_rdma_utils.a 00:08:18.690 SO libspdk_conf.so.6.0 00:08:18.690 SO libspdk_rdma_utils.so.1.0 00:08:18.690 SYMLINK libspdk_conf.so 00:08:18.690 CC lib/env_dpdk/pci_ioat.o 00:08:18.690 SYMLINK libspdk_rdma_utils.so 00:08:18.690 CC lib/env_dpdk/pci_virtio.o 00:08:18.690 CC lib/env_dpdk/pci_vmd.o 00:08:18.690 LIB libspdk_json.a 00:08:18.690 SO libspdk_json.so.6.0 00:08:18.690 CC lib/env_dpdk/pci_idxd.o 00:08:18.690 SYMLINK libspdk_json.so 00:08:18.690 LIB libspdk_idxd.a 00:08:18.690 CC lib/env_dpdk/pci_event.o 00:08:18.690 CC lib/env_dpdk/sigbus_handler.o 00:08:18.690 SO libspdk_idxd.so.12.1 00:08:18.690 LIB libspdk_vmd.a 00:08:18.690 SO libspdk_vmd.so.6.0 00:08:18.690 CC lib/env_dpdk/pci_dpdk.o 00:08:18.690 SYMLINK libspdk_idxd.so 00:08:18.690 CC lib/rdma_provider/common.o 00:08:18.690 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:18.690 SYMLINK libspdk_vmd.so 00:08:18.690 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:18.690 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:18.690 CC 
lib/jsonrpc/jsonrpc_server.o 00:08:18.690 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:18.690 CC lib/jsonrpc/jsonrpc_client.o 00:08:18.690 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:18.690 LIB libspdk_rdma_provider.a 00:08:18.690 SO libspdk_rdma_provider.so.7.0 00:08:18.690 SYMLINK libspdk_rdma_provider.so 00:08:18.690 LIB libspdk_jsonrpc.a 00:08:18.690 SO libspdk_jsonrpc.so.6.0 00:08:18.690 SYMLINK libspdk_jsonrpc.so 00:08:18.690 LIB libspdk_env_dpdk.a 00:08:18.950 SO libspdk_env_dpdk.so.15.1 00:08:18.950 CC lib/rpc/rpc.o 00:08:18.950 SYMLINK libspdk_env_dpdk.so 00:08:19.209 LIB libspdk_rpc.a 00:08:19.209 SO libspdk_rpc.so.6.0 00:08:19.467 SYMLINK libspdk_rpc.so 00:08:19.726 CC lib/keyring/keyring.o 00:08:19.726 CC lib/keyring/keyring_rpc.o 00:08:19.726 CC lib/trace/trace.o 00:08:19.726 CC lib/trace/trace_rpc.o 00:08:19.726 CC lib/trace/trace_flags.o 00:08:19.726 CC lib/notify/notify.o 00:08:19.726 CC lib/notify/notify_rpc.o 00:08:19.985 LIB libspdk_notify.a 00:08:19.985 LIB libspdk_keyring.a 00:08:19.985 SO libspdk_notify.so.6.0 00:08:19.985 LIB libspdk_trace.a 00:08:19.985 SO libspdk_keyring.so.2.0 00:08:19.985 SYMLINK libspdk_notify.so 00:08:19.985 SO libspdk_trace.so.11.0 00:08:19.985 SYMLINK libspdk_keyring.so 00:08:20.244 SYMLINK libspdk_trace.so 00:08:20.502 CC lib/thread/thread.o 00:08:20.502 CC lib/thread/iobuf.o 00:08:20.502 CC lib/sock/sock.o 00:08:20.502 CC lib/sock/sock_rpc.o 00:08:21.087 LIB libspdk_sock.a 00:08:21.087 SO libspdk_sock.so.10.0 00:08:21.087 SYMLINK libspdk_sock.so 00:08:21.655 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:21.655 CC lib/nvme/nvme_ctrlr.o 00:08:21.655 CC lib/nvme/nvme_fabric.o 00:08:21.655 CC lib/nvme/nvme_ns.o 00:08:21.655 CC lib/nvme/nvme_pcie_common.o 00:08:21.655 CC lib/nvme/nvme_ns_cmd.o 00:08:21.655 CC lib/nvme/nvme_pcie.o 00:08:21.655 CC lib/nvme/nvme_qpair.o 00:08:21.655 CC lib/nvme/nvme.o 00:08:21.913 LIB libspdk_thread.a 00:08:21.913 SO libspdk_thread.so.11.0 00:08:22.171 SYMLINK libspdk_thread.so 00:08:22.171 CC lib/nvme/nvme_quirks.o 00:08:22.171 CC lib/nvme/nvme_transport.o 00:08:22.171 CC lib/nvme/nvme_discovery.o 00:08:22.429 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:22.429 CC lib/accel/accel.o 00:08:22.687 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:22.687 CC lib/accel/accel_rpc.o 00:08:22.687 CC lib/nvme/nvme_tcp.o 00:08:22.687 CC lib/blob/blobstore.o 00:08:22.687 CC lib/accel/accel_sw.o 00:08:22.687 CC lib/nvme/nvme_opal.o 00:08:22.945 CC lib/blob/request.o 00:08:22.945 CC lib/init/json_config.o 00:08:23.203 CC lib/init/subsystem.o 00:08:23.203 CC lib/virtio/virtio.o 00:08:23.203 CC lib/virtio/virtio_vhost_user.o 00:08:23.203 CC lib/init/subsystem_rpc.o 00:08:23.203 CC lib/init/rpc.o 00:08:23.462 CC lib/virtio/virtio_vfio_user.o 00:08:23.462 CC lib/nvme/nvme_io_msg.o 00:08:23.462 CC lib/blob/zeroes.o 00:08:23.462 CC lib/fsdev/fsdev.o 00:08:23.462 CC lib/virtio/virtio_pci.o 00:08:23.462 LIB libspdk_init.a 00:08:23.462 LIB libspdk_accel.a 00:08:23.462 SO libspdk_init.so.6.0 00:08:23.462 CC lib/nvme/nvme_poll_group.o 00:08:23.462 SO libspdk_accel.so.16.0 00:08:23.722 SYMLINK libspdk_init.so 00:08:23.722 CC lib/blob/blob_bs_dev.o 00:08:23.722 CC lib/fsdev/fsdev_io.o 00:08:23.722 SYMLINK libspdk_accel.so 00:08:23.722 LIB libspdk_virtio.a 00:08:23.722 CC lib/event/app.o 00:08:23.722 SO libspdk_virtio.so.7.0 00:08:23.722 CC lib/bdev/bdev.o 00:08:23.722 CC lib/bdev/bdev_rpc.o 00:08:23.980 SYMLINK libspdk_virtio.so 00:08:23.980 CC lib/bdev/bdev_zone.o 00:08:23.980 CC lib/bdev/part.o 00:08:23.980 CC lib/bdev/scsi_nvme.o 00:08:23.980 CC lib/fsdev/fsdev_rpc.o 
00:08:23.980 CC lib/nvme/nvme_zns.o 00:08:24.238 CC lib/nvme/nvme_stubs.o 00:08:24.238 CC lib/nvme/nvme_auth.o 00:08:24.238 CC lib/event/reactor.o 00:08:24.238 LIB libspdk_fsdev.a 00:08:24.238 CC lib/event/log_rpc.o 00:08:24.238 CC lib/event/app_rpc.o 00:08:24.238 SO libspdk_fsdev.so.2.0 00:08:24.238 CC lib/event/scheduler_static.o 00:08:24.238 SYMLINK libspdk_fsdev.so 00:08:24.497 CC lib/nvme/nvme_cuse.o 00:08:24.497 CC lib/nvme/nvme_rdma.o 00:08:24.497 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:24.497 LIB libspdk_event.a 00:08:24.756 SO libspdk_event.so.14.0 00:08:24.756 SYMLINK libspdk_event.so 00:08:25.016 LIB libspdk_fuse_dispatcher.a 00:08:25.016 SO libspdk_fuse_dispatcher.so.1.0 00:08:25.277 SYMLINK libspdk_fuse_dispatcher.so 00:08:25.535 LIB libspdk_blob.a 00:08:25.535 SO libspdk_blob.so.12.0 00:08:25.794 LIB libspdk_nvme.a 00:08:25.794 SYMLINK libspdk_blob.so 00:08:25.794 SO libspdk_nvme.so.15.0 00:08:26.053 CC lib/blobfs/tree.o 00:08:26.053 CC lib/blobfs/blobfs.o 00:08:26.053 CC lib/lvol/lvol.o 00:08:26.053 SYMLINK libspdk_nvme.so 00:08:26.310 LIB libspdk_bdev.a 00:08:26.568 SO libspdk_bdev.so.17.0 00:08:26.568 SYMLINK libspdk_bdev.so 00:08:26.826 CC lib/nbd/nbd_rpc.o 00:08:26.826 CC lib/nbd/nbd.o 00:08:26.826 LIB libspdk_blobfs.a 00:08:26.826 CC lib/ftl/ftl_core.o 00:08:26.826 CC lib/ftl/ftl_init.o 00:08:26.826 CC lib/ftl/ftl_layout.o 00:08:26.826 CC lib/nvmf/ctrlr.o 00:08:26.826 CC lib/scsi/dev.o 00:08:26.826 CC lib/ublk/ublk.o 00:08:26.826 SO libspdk_blobfs.so.11.0 00:08:27.085 SYMLINK libspdk_blobfs.so 00:08:27.085 CC lib/ublk/ublk_rpc.o 00:08:27.085 LIB libspdk_lvol.a 00:08:27.085 SO libspdk_lvol.so.11.0 00:08:27.085 CC lib/ftl/ftl_debug.o 00:08:27.085 CC lib/ftl/ftl_io.o 00:08:27.085 CC lib/scsi/lun.o 00:08:27.085 SYMLINK libspdk_lvol.so 00:08:27.085 CC lib/ftl/ftl_sb.o 00:08:27.085 CC lib/ftl/ftl_l2p.o 00:08:27.085 CC lib/nvmf/ctrlr_discovery.o 00:08:27.342 CC lib/nvmf/ctrlr_bdev.o 00:08:27.342 CC lib/nvmf/subsystem.o 00:08:27.342 LIB libspdk_nbd.a 00:08:27.342 CC lib/ftl/ftl_l2p_flat.o 00:08:27.342 CC lib/scsi/port.o 00:08:27.342 SO libspdk_nbd.so.7.0 00:08:27.342 CC lib/ftl/ftl_nv_cache.o 00:08:27.342 CC lib/nvmf/nvmf.o 00:08:27.342 SYMLINK libspdk_nbd.so 00:08:27.342 CC lib/nvmf/nvmf_rpc.o 00:08:27.599 LIB libspdk_ublk.a 00:08:27.599 CC lib/scsi/scsi.o 00:08:27.599 SO libspdk_ublk.so.3.0 00:08:27.599 CC lib/scsi/scsi_bdev.o 00:08:27.599 SYMLINK libspdk_ublk.so 00:08:27.599 CC lib/nvmf/transport.o 00:08:27.599 CC lib/scsi/scsi_pr.o 00:08:27.599 CC lib/ftl/ftl_band.o 00:08:27.857 CC lib/nvmf/tcp.o 00:08:27.857 CC lib/scsi/scsi_rpc.o 00:08:28.124 CC lib/ftl/ftl_band_ops.o 00:08:28.124 CC lib/nvmf/stubs.o 00:08:28.124 CC lib/scsi/task.o 00:08:28.124 CC lib/nvmf/mdns_server.o 00:08:28.124 CC lib/nvmf/rdma.o 00:08:28.124 CC lib/nvmf/auth.o 00:08:28.385 CC lib/ftl/ftl_writer.o 00:08:28.385 LIB libspdk_scsi.a 00:08:28.385 CC lib/ftl/ftl_rq.o 00:08:28.385 SO libspdk_scsi.so.9.0 00:08:28.385 CC lib/ftl/ftl_reloc.o 00:08:28.385 SYMLINK libspdk_scsi.so 00:08:28.385 CC lib/ftl/ftl_l2p_cache.o 00:08:28.643 CC lib/ftl/ftl_p2l.o 00:08:28.643 CC lib/ftl/ftl_p2l_log.o 00:08:28.643 CC lib/ftl/mngt/ftl_mngt.o 00:08:28.643 CC lib/iscsi/conn.o 00:08:28.643 CC lib/vhost/vhost.o 00:08:28.901 CC lib/vhost/vhost_rpc.o 00:08:28.901 CC lib/vhost/vhost_scsi.o 00:08:28.901 CC lib/vhost/vhost_blk.o 00:08:28.901 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:28.901 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:28.901 CC lib/iscsi/init_grp.o 00:08:29.160 CC lib/iscsi/iscsi.o 00:08:29.160 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:08:29.160 CC lib/iscsi/param.o 00:08:29.160 CC lib/iscsi/portal_grp.o 00:08:29.160 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:29.419 CC lib/vhost/rte_vhost_user.o 00:08:29.419 CC lib/iscsi/tgt_node.o 00:08:29.419 CC lib/iscsi/iscsi_subsystem.o 00:08:29.419 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:29.419 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:29.678 CC lib/iscsi/iscsi_rpc.o 00:08:29.678 CC lib/iscsi/task.o 00:08:29.678 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:29.678 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:29.678 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:29.678 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:29.937 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:29.937 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:29.937 CC lib/ftl/utils/ftl_conf.o 00:08:29.937 CC lib/ftl/utils/ftl_md.o 00:08:29.937 CC lib/ftl/utils/ftl_mempool.o 00:08:29.937 CC lib/ftl/utils/ftl_bitmap.o 00:08:30.197 CC lib/ftl/utils/ftl_property.o 00:08:30.197 LIB libspdk_nvmf.a 00:08:30.197 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:30.197 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:30.197 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:30.197 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:30.197 SO libspdk_nvmf.so.20.0 00:08:30.197 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:30.456 LIB libspdk_vhost.a 00:08:30.456 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:30.456 LIB libspdk_iscsi.a 00:08:30.456 SYMLINK libspdk_nvmf.so 00:08:30.456 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:30.456 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:30.456 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:30.456 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:30.456 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:30.456 SO libspdk_vhost.so.8.0 00:08:30.456 SO libspdk_iscsi.so.8.0 00:08:30.456 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:30.456 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:30.456 SYMLINK libspdk_vhost.so 00:08:30.456 CC lib/ftl/base/ftl_base_dev.o 00:08:30.456 CC lib/ftl/base/ftl_base_bdev.o 00:08:30.456 CC lib/ftl/ftl_trace.o 00:08:30.715 SYMLINK libspdk_iscsi.so 00:08:30.715 LIB libspdk_ftl.a 00:08:31.284 SO libspdk_ftl.so.9.0 00:08:31.284 SYMLINK libspdk_ftl.so 00:08:31.850 CC module/env_dpdk/env_dpdk_rpc.o 00:08:31.850 CC module/sock/uring/uring.o 00:08:31.850 CC module/accel/error/accel_error.o 00:08:31.850 CC module/sock/posix/posix.o 00:08:31.850 CC module/keyring/file/keyring.o 00:08:31.850 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:31.850 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:31.850 CC module/fsdev/aio/fsdev_aio.o 00:08:31.850 CC module/blob/bdev/blob_bdev.o 00:08:31.850 CC module/scheduler/gscheduler/gscheduler.o 00:08:31.850 LIB libspdk_env_dpdk_rpc.a 00:08:31.850 SO libspdk_env_dpdk_rpc.so.6.0 00:08:32.109 SYMLINK libspdk_env_dpdk_rpc.so 00:08:32.109 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:32.109 CC module/keyring/file/keyring_rpc.o 00:08:32.109 LIB libspdk_scheduler_dpdk_governor.a 00:08:32.109 LIB libspdk_scheduler_dynamic.a 00:08:32.109 LIB libspdk_scheduler_gscheduler.a 00:08:32.109 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:32.109 CC module/accel/error/accel_error_rpc.o 00:08:32.109 SO libspdk_scheduler_gscheduler.so.4.0 00:08:32.109 SO libspdk_scheduler_dynamic.so.4.0 00:08:32.109 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:32.109 SYMLINK libspdk_scheduler_gscheduler.so 00:08:32.109 SYMLINK libspdk_scheduler_dynamic.so 00:08:32.109 LIB libspdk_blob_bdev.a 00:08:32.109 CC module/fsdev/aio/linux_aio_mgr.o 00:08:32.109 LIB libspdk_keyring_file.a 00:08:32.109 SO libspdk_blob_bdev.so.12.0 00:08:32.109 SO 
libspdk_keyring_file.so.2.0 00:08:32.366 SYMLINK libspdk_blob_bdev.so 00:08:32.366 LIB libspdk_accel_error.a 00:08:32.366 SYMLINK libspdk_keyring_file.so 00:08:32.366 SO libspdk_accel_error.so.2.0 00:08:32.366 CC module/keyring/linux/keyring.o 00:08:32.366 CC module/accel/ioat/accel_ioat.o 00:08:32.367 CC module/accel/dsa/accel_dsa.o 00:08:32.367 CC module/keyring/linux/keyring_rpc.o 00:08:32.367 SYMLINK libspdk_accel_error.so 00:08:32.367 CC module/accel/iaa/accel_iaa.o 00:08:32.625 CC module/accel/iaa/accel_iaa_rpc.o 00:08:32.625 LIB libspdk_sock_uring.a 00:08:32.625 LIB libspdk_keyring_linux.a 00:08:32.625 CC module/accel/ioat/accel_ioat_rpc.o 00:08:32.625 LIB libspdk_fsdev_aio.a 00:08:32.625 SO libspdk_sock_uring.so.5.0 00:08:32.625 SO libspdk_keyring_linux.so.1.0 00:08:32.625 SO libspdk_fsdev_aio.so.1.0 00:08:32.625 SYMLINK libspdk_sock_uring.so 00:08:32.625 SYMLINK libspdk_keyring_linux.so 00:08:32.625 CC module/bdev/delay/vbdev_delay.o 00:08:32.625 LIB libspdk_sock_posix.a 00:08:32.625 CC module/accel/dsa/accel_dsa_rpc.o 00:08:32.625 SYMLINK libspdk_fsdev_aio.so 00:08:32.625 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:32.625 LIB libspdk_accel_iaa.a 00:08:32.625 SO libspdk_sock_posix.so.6.0 00:08:32.625 LIB libspdk_accel_ioat.a 00:08:32.625 SO libspdk_accel_iaa.so.3.0 00:08:32.625 SO libspdk_accel_ioat.so.6.0 00:08:32.625 CC module/blobfs/bdev/blobfs_bdev.o 00:08:32.885 SYMLINK libspdk_sock_posix.so 00:08:32.885 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:32.885 LIB libspdk_accel_dsa.a 00:08:32.885 SYMLINK libspdk_accel_ioat.so 00:08:32.885 SYMLINK libspdk_accel_iaa.so 00:08:32.885 CC module/bdev/error/vbdev_error.o 00:08:32.885 CC module/bdev/error/vbdev_error_rpc.o 00:08:32.885 CC module/bdev/lvol/vbdev_lvol.o 00:08:32.885 SO libspdk_accel_dsa.so.5.0 00:08:32.885 CC module/bdev/gpt/gpt.o 00:08:32.885 CC module/bdev/gpt/vbdev_gpt.o 00:08:32.885 SYMLINK libspdk_accel_dsa.so 00:08:32.885 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:32.885 LIB libspdk_blobfs_bdev.a 00:08:32.885 CC module/bdev/null/bdev_null.o 00:08:32.885 CC module/bdev/malloc/bdev_malloc.o 00:08:32.885 SO libspdk_blobfs_bdev.so.6.0 00:08:32.885 LIB libspdk_bdev_delay.a 00:08:33.145 CC module/bdev/null/bdev_null_rpc.o 00:08:33.145 SO libspdk_bdev_delay.so.6.0 00:08:33.145 SYMLINK libspdk_blobfs_bdev.so 00:08:33.145 SYMLINK libspdk_bdev_delay.so 00:08:33.145 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:33.145 LIB libspdk_bdev_error.a 00:08:33.145 LIB libspdk_bdev_gpt.a 00:08:33.145 SO libspdk_bdev_error.so.6.0 00:08:33.145 SO libspdk_bdev_gpt.so.6.0 00:08:33.145 LIB libspdk_bdev_null.a 00:08:33.404 SO libspdk_bdev_null.so.6.0 00:08:33.404 CC module/bdev/nvme/bdev_nvme.o 00:08:33.404 SYMLINK libspdk_bdev_gpt.so 00:08:33.404 SYMLINK libspdk_bdev_error.so 00:08:33.404 CC module/bdev/passthru/vbdev_passthru.o 00:08:33.404 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:33.404 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:33.404 CC module/bdev/nvme/nvme_rpc.o 00:08:33.404 LIB libspdk_bdev_lvol.a 00:08:33.404 LIB libspdk_bdev_malloc.a 00:08:33.404 SYMLINK libspdk_bdev_null.so 00:08:33.404 CC module/bdev/nvme/bdev_mdns_client.o 00:08:33.404 SO libspdk_bdev_lvol.so.6.0 00:08:33.404 SO libspdk_bdev_malloc.so.6.0 00:08:33.404 CC module/bdev/raid/bdev_raid.o 00:08:33.404 SYMLINK libspdk_bdev_lvol.so 00:08:33.404 CC module/bdev/raid/bdev_raid_rpc.o 00:08:33.404 CC module/bdev/raid/bdev_raid_sb.o 00:08:33.404 CC module/bdev/split/vbdev_split.o 00:08:33.404 SYMLINK libspdk_bdev_malloc.so 00:08:33.404 CC 
module/bdev/split/vbdev_split_rpc.o 00:08:33.404 CC module/bdev/nvme/vbdev_opal.o 00:08:33.663 LIB libspdk_bdev_passthru.a 00:08:33.663 SO libspdk_bdev_passthru.so.6.0 00:08:33.663 CC module/bdev/raid/raid0.o 00:08:33.663 LIB libspdk_bdev_split.a 00:08:33.663 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:33.663 SYMLINK libspdk_bdev_passthru.so 00:08:33.663 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:33.663 SO libspdk_bdev_split.so.6.0 00:08:33.922 SYMLINK libspdk_bdev_split.so 00:08:33.922 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:33.922 CC module/bdev/uring/bdev_uring.o 00:08:33.922 CC module/bdev/uring/bdev_uring_rpc.o 00:08:33.922 CC module/bdev/raid/raid1.o 00:08:33.922 CC module/bdev/aio/bdev_aio.o 00:08:33.922 CC module/bdev/ftl/bdev_ftl.o 00:08:34.181 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:34.181 LIB libspdk_bdev_zone_block.a 00:08:34.181 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:34.181 CC module/bdev/iscsi/bdev_iscsi.o 00:08:34.181 SO libspdk_bdev_zone_block.so.6.0 00:08:34.181 SYMLINK libspdk_bdev_zone_block.so 00:08:34.181 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:34.181 LIB libspdk_bdev_uring.a 00:08:34.181 CC module/bdev/aio/bdev_aio_rpc.o 00:08:34.181 SO libspdk_bdev_uring.so.6.0 00:08:34.181 CC module/bdev/raid/concat.o 00:08:34.181 LIB libspdk_bdev_ftl.a 00:08:34.440 SYMLINK libspdk_bdev_uring.so 00:08:34.440 SO libspdk_bdev_ftl.so.6.0 00:08:34.440 LIB libspdk_bdev_aio.a 00:08:34.440 SYMLINK libspdk_bdev_ftl.so 00:08:34.440 SO libspdk_bdev_aio.so.6.0 00:08:34.440 LIB libspdk_bdev_iscsi.a 00:08:34.440 LIB libspdk_bdev_raid.a 00:08:34.440 SO libspdk_bdev_iscsi.so.6.0 00:08:34.440 SYMLINK libspdk_bdev_aio.so 00:08:34.440 SO libspdk_bdev_raid.so.6.0 00:08:34.699 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:34.699 SYMLINK libspdk_bdev_iscsi.so 00:08:34.699 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:34.699 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:34.699 SYMLINK libspdk_bdev_raid.so 00:08:35.267 LIB libspdk_bdev_virtio.a 00:08:35.267 SO libspdk_bdev_virtio.so.6.0 00:08:35.267 SYMLINK libspdk_bdev_virtio.so 00:08:35.834 LIB libspdk_bdev_nvme.a 00:08:35.834 SO libspdk_bdev_nvme.so.7.1 00:08:36.094 SYMLINK libspdk_bdev_nvme.so 00:08:36.660 CC module/event/subsystems/scheduler/scheduler.o 00:08:36.660 CC module/event/subsystems/fsdev/fsdev.o 00:08:36.660 CC module/event/subsystems/keyring/keyring.o 00:08:36.660 CC module/event/subsystems/sock/sock.o 00:08:36.660 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:36.660 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:36.660 CC module/event/subsystems/iobuf/iobuf.o 00:08:36.660 CC module/event/subsystems/vmd/vmd.o 00:08:36.660 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:36.918 LIB libspdk_event_scheduler.a 00:08:36.918 LIB libspdk_event_keyring.a 00:08:36.918 LIB libspdk_event_fsdev.a 00:08:36.918 LIB libspdk_event_vhost_blk.a 00:08:36.918 LIB libspdk_event_sock.a 00:08:36.918 LIB libspdk_event_vmd.a 00:08:36.918 SO libspdk_event_scheduler.so.4.0 00:08:36.918 SO libspdk_event_keyring.so.1.0 00:08:36.918 SO libspdk_event_fsdev.so.1.0 00:08:36.918 SO libspdk_event_sock.so.5.0 00:08:36.918 SO libspdk_event_vhost_blk.so.3.0 00:08:36.918 LIB libspdk_event_iobuf.a 00:08:36.918 SO libspdk_event_vmd.so.6.0 00:08:36.918 SO libspdk_event_iobuf.so.3.0 00:08:36.918 SYMLINK libspdk_event_keyring.so 00:08:36.918 SYMLINK libspdk_event_fsdev.so 00:08:36.918 SYMLINK libspdk_event_sock.so 00:08:36.918 SYMLINK libspdk_event_scheduler.so 00:08:36.918 SYMLINK libspdk_event_vhost_blk.so 00:08:36.918 SYMLINK 
libspdk_event_vmd.so 00:08:36.918 SYMLINK libspdk_event_iobuf.so 00:08:37.485 CC module/event/subsystems/accel/accel.o 00:08:37.485 LIB libspdk_event_accel.a 00:08:37.485 SO libspdk_event_accel.so.6.0 00:08:37.743 SYMLINK libspdk_event_accel.so 00:08:38.009 CC module/event/subsystems/bdev/bdev.o 00:08:38.275 LIB libspdk_event_bdev.a 00:08:38.275 SO libspdk_event_bdev.so.6.0 00:08:38.275 SYMLINK libspdk_event_bdev.so 00:08:38.842 CC module/event/subsystems/ublk/ublk.o 00:08:38.842 CC module/event/subsystems/scsi/scsi.o 00:08:38.842 CC module/event/subsystems/nbd/nbd.o 00:08:38.842 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:38.842 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:38.842 LIB libspdk_event_ublk.a 00:08:38.842 SO libspdk_event_ublk.so.3.0 00:08:38.842 LIB libspdk_event_scsi.a 00:08:38.842 LIB libspdk_event_nbd.a 00:08:38.842 SO libspdk_event_scsi.so.6.0 00:08:39.100 SO libspdk_event_nbd.so.6.0 00:08:39.101 SYMLINK libspdk_event_ublk.so 00:08:39.101 SYMLINK libspdk_event_scsi.so 00:08:39.101 SYMLINK libspdk_event_nbd.so 00:08:39.101 LIB libspdk_event_nvmf.a 00:08:39.101 SO libspdk_event_nvmf.so.6.0 00:08:39.101 SYMLINK libspdk_event_nvmf.so 00:08:39.359 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:39.359 CC module/event/subsystems/iscsi/iscsi.o 00:08:39.618 LIB libspdk_event_vhost_scsi.a 00:08:39.618 SO libspdk_event_vhost_scsi.so.3.0 00:08:39.618 LIB libspdk_event_iscsi.a 00:08:39.618 SO libspdk_event_iscsi.so.6.0 00:08:39.618 SYMLINK libspdk_event_vhost_scsi.so 00:08:39.618 SYMLINK libspdk_event_iscsi.so 00:08:39.877 SO libspdk.so.6.0 00:08:39.877 SYMLINK libspdk.so 00:08:40.442 CXX app/trace/trace.o 00:08:40.443 CC app/spdk_lspci/spdk_lspci.o 00:08:40.443 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:40.443 CC app/trace_record/trace_record.o 00:08:40.443 CC app/iscsi_tgt/iscsi_tgt.o 00:08:40.443 CC app/nvmf_tgt/nvmf_main.o 00:08:40.443 CC app/spdk_tgt/spdk_tgt.o 00:08:40.443 CC examples/util/zipf/zipf.o 00:08:40.443 CC test/thread/poller_perf/poller_perf.o 00:08:40.443 CC examples/ioat/perf/perf.o 00:08:40.443 LINK spdk_lspci 00:08:40.443 LINK interrupt_tgt 00:08:40.443 LINK nvmf_tgt 00:08:40.443 LINK iscsi_tgt 00:08:40.443 LINK zipf 00:08:40.443 LINK poller_perf 00:08:40.443 LINK spdk_tgt 00:08:40.700 LINK ioat_perf 00:08:40.700 LINK spdk_trace_record 00:08:40.700 LINK spdk_trace 00:08:40.700 CC app/spdk_nvme_perf/perf.o 00:08:40.958 TEST_HEADER include/spdk/accel.h 00:08:40.958 TEST_HEADER include/spdk/accel_module.h 00:08:40.958 TEST_HEADER include/spdk/assert.h 00:08:40.958 TEST_HEADER include/spdk/barrier.h 00:08:40.958 CC examples/ioat/verify/verify.o 00:08:40.958 TEST_HEADER include/spdk/base64.h 00:08:40.958 TEST_HEADER include/spdk/bdev.h 00:08:40.958 TEST_HEADER include/spdk/bdev_module.h 00:08:40.958 CC app/spdk_nvme_discover/discovery_aer.o 00:08:40.958 TEST_HEADER include/spdk/bdev_zone.h 00:08:40.958 CC app/spdk_nvme_identify/identify.o 00:08:40.958 TEST_HEADER include/spdk/bit_array.h 00:08:40.958 TEST_HEADER include/spdk/bit_pool.h 00:08:40.958 TEST_HEADER include/spdk/blob_bdev.h 00:08:40.958 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:40.958 TEST_HEADER include/spdk/blobfs.h 00:08:40.958 TEST_HEADER include/spdk/blob.h 00:08:40.958 TEST_HEADER include/spdk/conf.h 00:08:40.958 TEST_HEADER include/spdk/config.h 00:08:40.958 TEST_HEADER include/spdk/cpuset.h 00:08:40.958 TEST_HEADER include/spdk/crc16.h 00:08:40.958 TEST_HEADER include/spdk/crc32.h 00:08:40.958 TEST_HEADER include/spdk/crc64.h 00:08:40.958 TEST_HEADER include/spdk/dif.h 
00:08:40.958 TEST_HEADER include/spdk/dma.h 00:08:40.958 TEST_HEADER include/spdk/endian.h 00:08:40.958 TEST_HEADER include/spdk/env_dpdk.h 00:08:40.958 TEST_HEADER include/spdk/env.h 00:08:40.958 TEST_HEADER include/spdk/event.h 00:08:40.958 TEST_HEADER include/spdk/fd_group.h 00:08:40.958 TEST_HEADER include/spdk/fd.h 00:08:40.958 TEST_HEADER include/spdk/file.h 00:08:40.958 TEST_HEADER include/spdk/fsdev.h 00:08:40.958 TEST_HEADER include/spdk/fsdev_module.h 00:08:40.958 TEST_HEADER include/spdk/ftl.h 00:08:40.958 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:40.958 TEST_HEADER include/spdk/gpt_spec.h 00:08:40.958 CC test/app/bdev_svc/bdev_svc.o 00:08:40.958 TEST_HEADER include/spdk/hexlify.h 00:08:40.958 CC app/spdk_top/spdk_top.o 00:08:40.958 TEST_HEADER include/spdk/histogram_data.h 00:08:40.958 TEST_HEADER include/spdk/idxd.h 00:08:40.958 CC test/dma/test_dma/test_dma.o 00:08:40.958 TEST_HEADER include/spdk/idxd_spec.h 00:08:40.958 TEST_HEADER include/spdk/init.h 00:08:40.958 TEST_HEADER include/spdk/ioat.h 00:08:40.958 TEST_HEADER include/spdk/ioat_spec.h 00:08:40.958 TEST_HEADER include/spdk/iscsi_spec.h 00:08:40.958 TEST_HEADER include/spdk/json.h 00:08:40.958 TEST_HEADER include/spdk/jsonrpc.h 00:08:40.958 TEST_HEADER include/spdk/keyring.h 00:08:40.958 TEST_HEADER include/spdk/keyring_module.h 00:08:40.958 TEST_HEADER include/spdk/likely.h 00:08:40.958 TEST_HEADER include/spdk/log.h 00:08:40.958 TEST_HEADER include/spdk/lvol.h 00:08:40.958 TEST_HEADER include/spdk/md5.h 00:08:40.958 TEST_HEADER include/spdk/memory.h 00:08:40.958 TEST_HEADER include/spdk/mmio.h 00:08:40.958 TEST_HEADER include/spdk/nbd.h 00:08:40.958 TEST_HEADER include/spdk/net.h 00:08:40.958 TEST_HEADER include/spdk/notify.h 00:08:40.958 TEST_HEADER include/spdk/nvme.h 00:08:40.958 TEST_HEADER include/spdk/nvme_intel.h 00:08:40.958 CC examples/thread/thread/thread_ex.o 00:08:40.958 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:40.958 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:40.958 TEST_HEADER include/spdk/nvme_spec.h 00:08:40.958 TEST_HEADER include/spdk/nvme_zns.h 00:08:40.958 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:40.958 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:40.958 TEST_HEADER include/spdk/nvmf.h 00:08:41.215 TEST_HEADER include/spdk/nvmf_spec.h 00:08:41.215 TEST_HEADER include/spdk/nvmf_transport.h 00:08:41.215 TEST_HEADER include/spdk/opal.h 00:08:41.215 TEST_HEADER include/spdk/opal_spec.h 00:08:41.215 TEST_HEADER include/spdk/pci_ids.h 00:08:41.215 TEST_HEADER include/spdk/pipe.h 00:08:41.215 TEST_HEADER include/spdk/queue.h 00:08:41.215 TEST_HEADER include/spdk/reduce.h 00:08:41.215 TEST_HEADER include/spdk/rpc.h 00:08:41.215 TEST_HEADER include/spdk/scheduler.h 00:08:41.216 TEST_HEADER include/spdk/scsi.h 00:08:41.216 TEST_HEADER include/spdk/scsi_spec.h 00:08:41.216 TEST_HEADER include/spdk/sock.h 00:08:41.216 LINK spdk_nvme_discover 00:08:41.216 TEST_HEADER include/spdk/stdinc.h 00:08:41.216 TEST_HEADER include/spdk/string.h 00:08:41.216 LINK verify 00:08:41.216 TEST_HEADER include/spdk/thread.h 00:08:41.216 TEST_HEADER include/spdk/trace.h 00:08:41.216 TEST_HEADER include/spdk/trace_parser.h 00:08:41.216 TEST_HEADER include/spdk/tree.h 00:08:41.216 TEST_HEADER include/spdk/ublk.h 00:08:41.216 TEST_HEADER include/spdk/util.h 00:08:41.216 TEST_HEADER include/spdk/uuid.h 00:08:41.216 TEST_HEADER include/spdk/version.h 00:08:41.216 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:41.216 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:41.216 TEST_HEADER include/spdk/vhost.h 00:08:41.216 
TEST_HEADER include/spdk/vmd.h 00:08:41.216 TEST_HEADER include/spdk/xor.h 00:08:41.216 TEST_HEADER include/spdk/zipf.h 00:08:41.216 CXX test/cpp_headers/accel.o 00:08:41.216 CC examples/sock/hello_world/hello_sock.o 00:08:41.216 LINK bdev_svc 00:08:41.216 CXX test/cpp_headers/accel_module.o 00:08:41.216 LINK thread 00:08:41.473 LINK hello_sock 00:08:41.473 CXX test/cpp_headers/assert.o 00:08:41.473 LINK test_dma 00:08:41.473 CC app/spdk_dd/spdk_dd.o 00:08:41.730 CC app/fio/nvme/fio_plugin.o 00:08:41.730 LINK spdk_nvme_perf 00:08:41.730 CXX test/cpp_headers/barrier.o 00:08:41.730 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:41.730 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:41.730 LINK spdk_nvme_identify 00:08:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:41.730 CXX test/cpp_headers/base64.o 00:08:41.730 CC examples/vmd/lsvmd/lsvmd.o 00:08:41.730 LINK spdk_top 00:08:41.988 LINK lsvmd 00:08:41.988 LINK spdk_dd 00:08:41.988 CXX test/cpp_headers/bdev.o 00:08:41.988 CC app/fio/bdev/fio_plugin.o 00:08:41.988 CC examples/vmd/led/led.o 00:08:41.988 LINK nvme_fuzz 00:08:42.246 LINK spdk_nvme 00:08:42.246 CC app/vhost/vhost.o 00:08:42.246 LINK led 00:08:42.246 CXX test/cpp_headers/bdev_module.o 00:08:42.246 LINK vhost_fuzz 00:08:42.246 CXX test/cpp_headers/bdev_zone.o 00:08:42.246 LINK vhost 00:08:42.504 CC test/event/event_perf/event_perf.o 00:08:42.504 CC examples/idxd/perf/perf.o 00:08:42.504 CXX test/cpp_headers/bit_array.o 00:08:42.504 CC test/env/mem_callbacks/mem_callbacks.o 00:08:42.504 CC test/event/reactor/reactor.o 00:08:42.504 CC test/env/vtophys/vtophys.o 00:08:42.504 LINK spdk_bdev 00:08:42.504 LINK event_perf 00:08:42.504 CXX test/cpp_headers/bit_pool.o 00:08:42.504 CC test/nvme/aer/aer.o 00:08:42.504 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:42.504 LINK reactor 00:08:42.762 LINK vtophys 00:08:42.762 CC test/app/histogram_perf/histogram_perf.o 00:08:42.762 LINK idxd_perf 00:08:42.762 CXX test/cpp_headers/blob_bdev.o 00:08:42.762 LINK env_dpdk_post_init 00:08:42.762 CC test/app/jsoncat/jsoncat.o 00:08:42.762 LINK histogram_perf 00:08:42.762 CC test/event/reactor_perf/reactor_perf.o 00:08:42.762 LINK aer 00:08:43.020 CC test/event/app_repeat/app_repeat.o 00:08:43.020 CXX test/cpp_headers/blobfs_bdev.o 00:08:43.020 LINK jsoncat 00:08:43.020 LINK mem_callbacks 00:08:43.020 CXX test/cpp_headers/blobfs.o 00:08:43.020 LINK reactor_perf 00:08:43.020 CXX test/cpp_headers/blob.o 00:08:43.020 LINK app_repeat 00:08:43.020 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:43.020 CXX test/cpp_headers/conf.o 00:08:43.278 CC test/nvme/reset/reset.o 00:08:43.278 CC test/env/memory/memory_ut.o 00:08:43.278 CC test/nvme/sgl/sgl.o 00:08:43.278 CC test/nvme/e2edp/nvme_dp.o 00:08:43.278 CXX test/cpp_headers/config.o 00:08:43.278 CC test/nvme/overhead/overhead.o 00:08:43.278 CC test/nvme/err_injection/err_injection.o 00:08:43.278 CXX test/cpp_headers/cpuset.o 00:08:43.278 LINK hello_fsdev 00:08:43.278 CC test/event/scheduler/scheduler.o 00:08:43.536 LINK iscsi_fuzz 00:08:43.536 LINK reset 00:08:43.536 CXX test/cpp_headers/crc16.o 00:08:43.536 LINK sgl 00:08:43.536 LINK err_injection 00:08:43.536 LINK nvme_dp 00:08:43.536 LINK overhead 00:08:43.536 LINK scheduler 00:08:43.536 CXX test/cpp_headers/crc32.o 00:08:43.794 CC test/env/pci/pci_ut.o 00:08:43.794 CC test/app/stub/stub.o 00:08:43.794 CC examples/accel/perf/accel_perf.o 00:08:43.794 CC test/nvme/startup/startup.o 00:08:43.794 CC test/nvme/reserve/reserve.o 
00:08:43.794 CXX test/cpp_headers/crc64.o 00:08:43.794 CC test/nvme/simple_copy/simple_copy.o 00:08:43.794 CC examples/blob/hello_world/hello_blob.o 00:08:43.794 CC test/rpc_client/rpc_client_test.o 00:08:43.794 LINK stub 00:08:44.052 LINK startup 00:08:44.052 LINK reserve 00:08:44.052 CXX test/cpp_headers/dif.o 00:08:44.052 LINK simple_copy 00:08:44.052 LINK rpc_client_test 00:08:44.052 LINK pci_ut 00:08:44.052 LINK hello_blob 00:08:44.052 CXX test/cpp_headers/dma.o 00:08:44.309 CC test/nvme/connect_stress/connect_stress.o 00:08:44.309 LINK accel_perf 00:08:44.309 CC examples/blob/cli/blobcli.o 00:08:44.309 CC test/nvme/boot_partition/boot_partition.o 00:08:44.309 LINK memory_ut 00:08:44.309 CXX test/cpp_headers/endian.o 00:08:44.309 LINK boot_partition 00:08:44.309 CXX test/cpp_headers/env_dpdk.o 00:08:44.309 LINK connect_stress 00:08:44.309 CXX test/cpp_headers/env.o 00:08:44.567 CC examples/nvme/hello_world/hello_world.o 00:08:44.567 CC test/accel/dif/dif.o 00:08:44.567 CXX test/cpp_headers/event.o 00:08:44.567 CC test/blobfs/mkfs/mkfs.o 00:08:44.567 CC test/nvme/compliance/nvme_compliance.o 00:08:44.567 LINK hello_world 00:08:44.567 CC test/nvme/fused_ordering/fused_ordering.o 00:08:44.825 LINK blobcli 00:08:44.825 CC examples/nvme/reconnect/reconnect.o 00:08:44.825 CC test/lvol/esnap/esnap.o 00:08:44.825 CXX test/cpp_headers/fd_group.o 00:08:44.825 CC examples/bdev/hello_world/hello_bdev.o 00:08:44.825 LINK mkfs 00:08:44.825 CXX test/cpp_headers/fd.o 00:08:44.825 CXX test/cpp_headers/file.o 00:08:44.825 LINK fused_ordering 00:08:45.083 LINK nvme_compliance 00:08:45.083 LINK hello_bdev 00:08:45.083 LINK reconnect 00:08:45.083 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:45.083 CXX test/cpp_headers/fsdev.o 00:08:45.083 LINK dif 00:08:45.083 CC examples/nvme/arbitration/arbitration.o 00:08:45.342 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:45.342 CC test/nvme/fdp/fdp.o 00:08:45.342 CXX test/cpp_headers/fsdev_module.o 00:08:45.342 CC test/nvme/cuse/cuse.o 00:08:45.342 CXX test/cpp_headers/ftl.o 00:08:45.342 CC examples/nvme/hotplug/hotplug.o 00:08:45.342 LINK doorbell_aers 00:08:45.342 CC examples/bdev/bdevperf/bdevperf.o 00:08:45.600 LINK arbitration 00:08:45.600 CXX test/cpp_headers/fuse_dispatcher.o 00:08:45.600 LINK fdp 00:08:45.600 LINK nvme_manage 00:08:45.600 LINK hotplug 00:08:45.600 CC test/bdev/bdevio/bdevio.o 00:08:45.858 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:45.858 CXX test/cpp_headers/gpt_spec.o 00:08:45.858 CXX test/cpp_headers/hexlify.o 00:08:45.858 CC examples/nvme/abort/abort.o 00:08:45.858 CXX test/cpp_headers/histogram_data.o 00:08:45.858 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:45.858 LINK cmb_copy 00:08:45.858 CXX test/cpp_headers/idxd.o 00:08:45.858 CXX test/cpp_headers/idxd_spec.o 00:08:46.116 CXX test/cpp_headers/init.o 00:08:46.116 LINK bdevio 00:08:46.116 LINK pmr_persistence 00:08:46.116 CXX test/cpp_headers/ioat.o 00:08:46.116 CXX test/cpp_headers/ioat_spec.o 00:08:46.116 LINK abort 00:08:46.116 CXX test/cpp_headers/iscsi_spec.o 00:08:46.116 CXX test/cpp_headers/json.o 00:08:46.116 LINK bdevperf 00:08:46.428 CXX test/cpp_headers/jsonrpc.o 00:08:46.428 CXX test/cpp_headers/keyring.o 00:08:46.428 CXX test/cpp_headers/keyring_module.o 00:08:46.428 CXX test/cpp_headers/likely.o 00:08:46.428 CXX test/cpp_headers/log.o 00:08:46.428 CXX test/cpp_headers/lvol.o 00:08:46.428 CXX test/cpp_headers/md5.o 00:08:46.428 CXX test/cpp_headers/memory.o 00:08:46.428 CXX test/cpp_headers/mmio.o 00:08:46.428 CXX test/cpp_headers/nbd.o 00:08:46.428 CXX 
test/cpp_headers/net.o 00:08:46.428 CXX test/cpp_headers/notify.o 00:08:46.428 CXX test/cpp_headers/nvme.o 00:08:46.428 CXX test/cpp_headers/nvme_intel.o 00:08:46.703 CXX test/cpp_headers/nvme_ocssd.o 00:08:46.703 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:46.703 CXX test/cpp_headers/nvme_spec.o 00:08:46.703 LINK cuse 00:08:46.703 CXX test/cpp_headers/nvmf_cmd.o 00:08:46.703 CXX test/cpp_headers/nvme_zns.o 00:08:46.703 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:46.703 CC examples/nvmf/nvmf/nvmf.o 00:08:46.703 CXX test/cpp_headers/nvmf.o 00:08:46.703 CXX test/cpp_headers/nvmf_spec.o 00:08:46.960 CXX test/cpp_headers/nvmf_transport.o 00:08:46.960 CXX test/cpp_headers/opal.o 00:08:46.960 CXX test/cpp_headers/opal_spec.o 00:08:46.960 CXX test/cpp_headers/pci_ids.o 00:08:46.960 CXX test/cpp_headers/pipe.o 00:08:46.960 CXX test/cpp_headers/queue.o 00:08:46.960 CXX test/cpp_headers/reduce.o 00:08:46.960 CXX test/cpp_headers/rpc.o 00:08:46.960 CXX test/cpp_headers/scheduler.o 00:08:46.960 LINK nvmf 00:08:46.960 CXX test/cpp_headers/scsi.o 00:08:46.960 CXX test/cpp_headers/scsi_spec.o 00:08:46.960 CXX test/cpp_headers/sock.o 00:08:46.960 CXX test/cpp_headers/stdinc.o 00:08:46.960 CXX test/cpp_headers/string.o 00:08:47.217 CXX test/cpp_headers/thread.o 00:08:47.217 CXX test/cpp_headers/trace.o 00:08:47.217 CXX test/cpp_headers/trace_parser.o 00:08:47.217 CXX test/cpp_headers/tree.o 00:08:47.217 CXX test/cpp_headers/ublk.o 00:08:47.217 CXX test/cpp_headers/util.o 00:08:47.217 CXX test/cpp_headers/uuid.o 00:08:47.217 CXX test/cpp_headers/version.o 00:08:47.217 CXX test/cpp_headers/vfio_user_pci.o 00:08:47.217 CXX test/cpp_headers/vfio_user_spec.o 00:08:47.217 CXX test/cpp_headers/vhost.o 00:08:47.217 CXX test/cpp_headers/vmd.o 00:08:47.217 CXX test/cpp_headers/xor.o 00:08:47.476 CXX test/cpp_headers/zipf.o 00:08:50.005 LINK esnap 00:08:50.005 00:08:50.005 real 1m29.224s 00:08:50.005 user 7m37.960s 00:08:50.005 sys 1m53.995s 00:08:50.005 10:52:17 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:50.005 ************************************ 00:08:50.005 END TEST make 00:08:50.005 ************************************ 00:08:50.005 10:52:17 make -- common/autotest_common.sh@10 -- $ set +x 00:08:50.264 10:52:17 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:50.264 10:52:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:50.264 10:52:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:50.264 10:52:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.264 10:52:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:50.264 10:52:17 -- pm/common@44 -- $ pid=5249 00:08:50.264 10:52:17 -- pm/common@50 -- $ kill -TERM 5249 00:08:50.264 10:52:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.264 10:52:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:50.264 10:52:17 -- pm/common@44 -- $ pid=5251 00:08:50.264 10:52:17 -- pm/common@50 -- $ kill -TERM 5251 00:08:50.264 10:52:17 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:50.264 10:52:17 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:50.264 10:52:17 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.264 10:52:17 -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.264 10:52:17 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 
00:08:50.264 10:52:17 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.264 10:52:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.264 10:52:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.264 10:52:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.264 10:52:17 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.264 10:52:17 -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.264 10:52:17 -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.264 10:52:17 -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.264 10:52:17 -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.264 10:52:17 -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.264 10:52:17 -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.264 10:52:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.264 10:52:17 -- scripts/common.sh@344 -- # case "$op" in 00:08:50.264 10:52:17 -- scripts/common.sh@345 -- # : 1 00:08:50.264 10:52:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.264 10:52:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.264 10:52:17 -- scripts/common.sh@365 -- # decimal 1 00:08:50.264 10:52:17 -- scripts/common.sh@353 -- # local d=1 00:08:50.264 10:52:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.264 10:52:17 -- scripts/common.sh@355 -- # echo 1 00:08:50.264 10:52:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.264 10:52:17 -- scripts/common.sh@366 -- # decimal 2 00:08:50.264 10:52:17 -- scripts/common.sh@353 -- # local d=2 00:08:50.264 10:52:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.264 10:52:17 -- scripts/common.sh@355 -- # echo 2 00:08:50.264 10:52:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.264 10:52:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.264 10:52:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.264 10:52:17 -- scripts/common.sh@368 -- # return 0 00:08:50.264 10:52:17 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.264 10:52:17 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.264 --rc genhtml_branch_coverage=1 00:08:50.264 --rc genhtml_function_coverage=1 00:08:50.264 --rc genhtml_legend=1 00:08:50.264 --rc geninfo_all_blocks=1 00:08:50.264 --rc geninfo_unexecuted_blocks=1 00:08:50.264 00:08:50.264 ' 00:08:50.264 10:52:17 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.264 --rc genhtml_branch_coverage=1 00:08:50.264 --rc genhtml_function_coverage=1 00:08:50.264 --rc genhtml_legend=1 00:08:50.264 --rc geninfo_all_blocks=1 00:08:50.264 --rc geninfo_unexecuted_blocks=1 00:08:50.264 00:08:50.264 ' 00:08:50.264 10:52:17 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.264 --rc genhtml_branch_coverage=1 00:08:50.264 --rc genhtml_function_coverage=1 00:08:50.264 --rc genhtml_legend=1 00:08:50.264 --rc geninfo_all_blocks=1 00:08:50.264 --rc geninfo_unexecuted_blocks=1 00:08:50.264 00:08:50.264 ' 00:08:50.264 10:52:17 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.264 --rc genhtml_branch_coverage=1 00:08:50.264 --rc genhtml_function_coverage=1 00:08:50.264 --rc genhtml_legend=1 00:08:50.264 --rc geninfo_all_blocks=1 00:08:50.264 --rc 
geninfo_unexecuted_blocks=1 00:08:50.264 00:08:50.264 ' 00:08:50.264 10:52:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.264 10:52:17 -- nvmf/common.sh@7 -- # uname -s 00:08:50.264 10:52:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.264 10:52:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.264 10:52:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.264 10:52:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.264 10:52:17 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.264 10:52:17 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:50.264 10:52:17 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.264 10:52:17 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:50.523 10:52:17 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:08:50.523 10:52:17 -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:08:50.523 10:52:17 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.523 10:52:17 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:50.523 10:52:17 -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:50.524 10:52:17 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.524 10:52:17 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.524 10:52:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.524 10:52:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.524 10:52:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.524 10:52:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.524 10:52:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 10:52:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 10:52:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 10:52:17 -- paths/export.sh@5 -- # export PATH 00:08:50.524 10:52:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.524 10:52:17 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:50.524 10:52:17 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:50.524 10:52:17 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:50.524 10:52:17 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:50.524 10:52:17 -- nvmf/common.sh@50 -- # : 0 00:08:50.524 10:52:17 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 
00:08:50.524 10:52:17 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:50.524 10:52:17 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:50.524 10:52:17 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.524 10:52:17 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.524 10:52:17 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:50.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:50.524 10:52:17 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:50.524 10:52:17 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:50.524 10:52:17 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:50.524 10:52:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:50.524 10:52:17 -- spdk/autotest.sh@32 -- # uname -s 00:08:50.524 10:52:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:50.524 10:52:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:50.524 10:52:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:50.524 10:52:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:50.524 10:52:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:50.524 10:52:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:50.524 10:52:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:50.524 10:52:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:50.524 10:52:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:50.524 10:52:17 -- spdk/autotest.sh@48 -- # udevadm_pid=54386 00:08:50.524 10:52:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:50.524 10:52:17 -- pm/common@17 -- # local monitor 00:08:50.524 10:52:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.524 10:52:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:50.524 10:52:17 -- pm/common@21 -- # date +%s 00:08:50.524 10:52:17 -- pm/common@25 -- # sleep 1 00:08:50.524 10:52:17 -- pm/common@21 -- # date +%s 00:08:50.524 10:52:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733395937 00:08:50.524 10:52:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733395937 00:08:50.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733395937_collect-cpu-load.pm.log 00:08:50.524 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733395937_collect-vmstat.pm.log 00:08:51.462 10:52:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:51.462 10:52:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:51.462 10:52:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.462 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:08:51.462 10:52:18 -- spdk/autotest.sh@59 -- # create_test_list 00:08:51.462 10:52:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:51.462 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:08:51.462 10:52:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:51.462 10:52:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:51.720 10:52:18 -- spdk/autotest.sh@61 -- # 
src=/home/vagrant/spdk_repo/spdk 00:08:51.720 10:52:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:51.720 10:52:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:51.720 10:52:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:51.720 10:52:18 -- common/autotest_common.sh@1457 -- # uname 00:08:51.720 10:52:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:51.720 10:52:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:51.720 10:52:18 -- common/autotest_common.sh@1477 -- # uname 00:08:51.720 10:52:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:51.720 10:52:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:51.720 10:52:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:51.720 lcov: LCOV version 1.15 00:08:51.720 10:52:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:09.812 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:09.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:24.702 10:52:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:24.702 10:52:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.702 10:52:50 -- common/autotest_common.sh@10 -- # set +x 00:09:24.702 10:52:50 -- spdk/autotest.sh@78 -- # rm -f 00:09:24.702 10:52:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.702 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:24.702 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:24.702 10:52:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:24.702 10:52:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:24.702 10:52:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:24.702 10:52:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:09:24.702 10:52:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:09:24.702 10:52:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:09:24.702 10:52:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:09:24.702 10:52:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:09:24.702 10:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:24.702 10:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:09:24.702 10:52:51 -- common/autotest_common.sh@1670 -- # 
for ns in "$nvme/"nvme*n* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:09:24.702 10:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:24.702 10:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:09:24.702 10:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:09:24.702 10:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:09:24.702 10:52:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:09:24.702 10:52:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:09:24.702 10:52:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:24.702 10:52:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:24.702 10:52:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:24.702 10:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.702 10:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.702 10:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:24.702 10:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:24.702 10:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:24.702 No valid GPT data, bailing 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # pt= 00:09:24.702 10:52:51 -- scripts/common.sh@395 -- # return 1 00:09:24.702 10:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:24.702 1+0 records in 00:09:24.702 1+0 records out 00:09:24.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614741 s, 171 MB/s 00:09:24.702 10:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.702 10:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.702 10:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:24.702 10:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:24.702 10:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:24.702 No valid GPT data, bailing 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # pt= 00:09:24.702 10:52:51 -- scripts/common.sh@395 -- # return 1 00:09:24.702 10:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:24.702 1+0 records in 00:09:24.702 1+0 records out 00:09:24.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00412691 s, 254 MB/s 00:09:24.702 10:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.702 10:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.702 10:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:24.702 10:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:24.702 10:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:09:24.702 No valid GPT data, bailing 00:09:24.702 10:52:51 
-- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # pt= 00:09:24.702 10:52:51 -- scripts/common.sh@395 -- # return 1 00:09:24.702 10:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:24.702 1+0 records in 00:09:24.702 1+0 records out 00:09:24.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037996 s, 276 MB/s 00:09:24.702 10:52:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:24.702 10:52:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:24.702 10:52:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:24.702 10:52:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:24.702 10:52:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:24.702 No valid GPT data, bailing 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:24.702 10:52:51 -- scripts/common.sh@394 -- # pt= 00:09:24.702 10:52:51 -- scripts/common.sh@395 -- # return 1 00:09:24.702 10:52:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:24.702 1+0 records in 00:09:24.702 1+0 records out 00:09:24.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578797 s, 181 MB/s 00:09:24.702 10:52:51 -- spdk/autotest.sh@105 -- # sync 00:09:24.702 10:52:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:24.702 10:52:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:24.702 10:52:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:28.031 10:52:54 -- spdk/autotest.sh@111 -- # uname -s 00:09:28.031 10:52:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:28.031 10:52:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:28.031 10:52:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:28.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.290 Hugepages 00:09:28.290 node hugesize free / total 00:09:28.290 node0 1048576kB 0 / 0 00:09:28.290 node0 2048kB 0 / 0 00:09:28.290 00:09:28.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:28.549 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:28.549 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:28.808 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:28.808 10:52:55 -- spdk/autotest.sh@117 -- # uname -s 00:09:28.808 10:52:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:28.808 10:52:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:28.808 10:52:55 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:29.745 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.745 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.745 10:52:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:31.124 10:52:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:31.124 10:52:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:31.124 10:52:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.124 10:52:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:31.124 10:52:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:31.124 10:52:57 -- common/autotest_common.sh@1498 -- # 
local bdfs 00:09:31.124 10:52:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.124 10:52:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.124 10:52:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:31.124 10:52:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:09:31.124 10:52:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:31.124 10:52:57 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.383 Waiting for block devices as requested 00:09:31.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.642 10:52:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:31.643 10:52:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:31.643 10:52:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:09:31.643 10:52:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:31.643 10:52:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:09:31.643 10:52:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:09:31.643 10:52:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:31.643 10:52:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:31.903 10:52:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:31.903 10:52:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:31.903 10:52:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:31.903 10:52:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1543 -- # continue 00:09:31.903 10:52:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:31.903 10:52:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:31.903 10:52:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:09:31.903 10:52:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1492 -- # basename 
/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:31.903 10:52:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:31.903 10:52:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:09:31.903 10:52:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:31.903 10:52:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:31.903 10:52:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:31.903 10:52:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:31.903 10:52:58 -- common/autotest_common.sh@1543 -- # continue 00:09:31.903 10:52:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:31.903 10:52:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.903 10:52:58 -- common/autotest_common.sh@10 -- # set +x 00:09:31.903 10:52:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:31.903 10:52:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.903 10:52:58 -- common/autotest_common.sh@10 -- # set +x 00:09:31.903 10:52:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.840 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:32.840 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.099 10:53:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:33.099 10:53:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.099 10:53:00 -- common/autotest_common.sh@10 -- # set +x 00:09:33.099 10:53:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:33.099 10:53:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:33.099 10:53:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:33.099 10:53:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:33.099 10:53:00 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:33.099 10:53:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:33.099 10:53:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:33.099 10:53:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:33.099 10:53:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:33.099 10:53:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:33.099 10:53:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:33.099 10:53:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:33.099 10:53:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:33.099 10:53:00 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:09:33.099 10:53:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:33.099 10:53:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 
00:09:33.099 10:53:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:33.099 10:53:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:33.099 10:53:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:33.099 10:53:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:33.099 10:53:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:33.099 10:53:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:09:33.099 10:53:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:33.099 10:53:00 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:09:33.099 10:53:00 -- common/autotest_common.sh@1572 -- # return 0 00:09:33.099 10:53:00 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:09:33.099 10:53:00 -- common/autotest_common.sh@1580 -- # return 0 00:09:33.099 10:53:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:33.099 10:53:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:33.099 10:53:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:33.099 10:53:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:33.099 10:53:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:33.099 10:53:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.099 10:53:00 -- common/autotest_common.sh@10 -- # set +x 00:09:33.099 10:53:00 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:09:33.099 10:53:00 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:09:33.099 10:53:00 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:09:33.099 10:53:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:33.100 10:53:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.100 10:53:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.100 10:53:00 -- common/autotest_common.sh@10 -- # set +x 00:09:33.100 ************************************ 00:09:33.100 START TEST env 00:09:33.100 ************************************ 00:09:33.100 10:53:00 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:33.358 * Looking for test storage... 
00:09:33.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:33.358 10:53:00 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.358 10:53:00 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.358 10:53:00 env -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.358 10:53:00 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.358 10:53:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.358 10:53:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.358 10:53:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.358 10:53:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.358 10:53:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.358 10:53:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.358 10:53:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.358 10:53:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.358 10:53:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.358 10:53:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.358 10:53:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.358 10:53:00 env -- scripts/common.sh@344 -- # case "$op" in 00:09:33.358 10:53:00 env -- scripts/common.sh@345 -- # : 1 00:09:33.358 10:53:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.358 10:53:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.358 10:53:00 env -- scripts/common.sh@365 -- # decimal 1 00:09:33.358 10:53:00 env -- scripts/common.sh@353 -- # local d=1 00:09:33.358 10:53:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.358 10:53:00 env -- scripts/common.sh@355 -- # echo 1 00:09:33.358 10:53:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.358 10:53:00 env -- scripts/common.sh@366 -- # decimal 2 00:09:33.358 10:53:00 env -- scripts/common.sh@353 -- # local d=2 00:09:33.358 10:53:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.358 10:53:00 env -- scripts/common.sh@355 -- # echo 2 00:09:33.358 10:53:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.358 10:53:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.359 10:53:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.359 10:53:00 env -- scripts/common.sh@368 -- # return 0 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.359 --rc genhtml_branch_coverage=1 00:09:33.359 --rc genhtml_function_coverage=1 00:09:33.359 --rc genhtml_legend=1 00:09:33.359 --rc geninfo_all_blocks=1 00:09:33.359 --rc geninfo_unexecuted_blocks=1 00:09:33.359 00:09:33.359 ' 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.359 --rc genhtml_branch_coverage=1 00:09:33.359 --rc genhtml_function_coverage=1 00:09:33.359 --rc genhtml_legend=1 00:09:33.359 --rc geninfo_all_blocks=1 00:09:33.359 --rc geninfo_unexecuted_blocks=1 00:09:33.359 00:09:33.359 ' 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.359 --rc genhtml_branch_coverage=1 00:09:33.359 --rc genhtml_function_coverage=1 00:09:33.359 --rc 
genhtml_legend=1 00:09:33.359 --rc geninfo_all_blocks=1 00:09:33.359 --rc geninfo_unexecuted_blocks=1 00:09:33.359 00:09:33.359 ' 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.359 --rc genhtml_branch_coverage=1 00:09:33.359 --rc genhtml_function_coverage=1 00:09:33.359 --rc genhtml_legend=1 00:09:33.359 --rc geninfo_all_blocks=1 00:09:33.359 --rc geninfo_unexecuted_blocks=1 00:09:33.359 00:09:33.359 ' 00:09:33.359 10:53:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.359 10:53:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.359 10:53:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:33.359 ************************************ 00:09:33.359 START TEST env_memory 00:09:33.359 ************************************ 00:09:33.359 10:53:00 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:33.359 00:09:33.359 00:09:33.359 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.359 http://cunit.sourceforge.net/ 00:09:33.359 00:09:33.359 00:09:33.359 Suite: memory 00:09:33.618 Test: alloc and free memory map ...[2024-12-05 10:53:00.544866] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:33.618 passed 00:09:33.618 Test: mem map translation ...[2024-12-05 10:53:00.565609] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:33.618 [2024-12-05 10:53:00.565638] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:33.618 [2024-12-05 10:53:00.565675] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:33.618 [2024-12-05 10:53:00.565682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:33.618 passed 00:09:33.618 Test: mem map registration ...[2024-12-05 10:53:00.603721] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:33.618 [2024-12-05 10:53:00.603906] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:33.618 passed 00:09:33.618 Test: mem map adjacent registrations ...passed 00:09:33.618 00:09:33.618 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.618 suites 1 1 n/a 0 0 00:09:33.618 tests 4 4 4 0 0 00:09:33.618 asserts 152 152 152 0 n/a 00:09:33.618 00:09:33.618 Elapsed time = 0.136 seconds 00:09:33.618 00:09:33.618 real 0m0.159s 00:09:33.618 user 0m0.140s 00:09:33.618 sys 0m0.013s 00:09:33.618 10:53:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.618 10:53:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:33.618 ************************************ 00:09:33.618 END TEST env_memory 00:09:33.618 ************************************ 00:09:33.618 10:53:00 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:33.618 10:53:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.618 10:53:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.619 10:53:00 env -- common/autotest_common.sh@10 -- # set +x 00:09:33.619 ************************************ 00:09:33.619 START TEST env_vtophys 00:09:33.619 ************************************ 00:09:33.619 10:53:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:33.619 EAL: lib.eal log level changed from notice to debug 00:09:33.619 EAL: Detected lcore 0 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 1 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 2 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 3 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 4 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 5 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 6 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 7 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 8 as core 0 on socket 0 00:09:33.619 EAL: Detected lcore 9 as core 0 on socket 0 00:09:33.619 EAL: Maximum logical cores by configuration: 128 00:09:33.619 EAL: Detected CPU lcores: 10 00:09:33.619 EAL: Detected NUMA nodes: 1 00:09:33.619 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:33.619 EAL: Detected shared linkage of DPDK 00:09:33.619 EAL: No shared files mode enabled, IPC will be disabled 00:09:33.619 EAL: Selected IOVA mode 'PA' 00:09:33.619 EAL: Probing VFIO support... 00:09:33.619 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:33.619 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:33.619 EAL: Ask a virtual area of 0x2e000 bytes 00:09:33.619 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:33.619 EAL: Setting up physically contiguous memory... 
00:09:33.619 EAL: Setting maximum number of open files to 524288 00:09:33.619 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:33.619 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:33.619 EAL: Ask a virtual area of 0x61000 bytes 00:09:33.619 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:33.619 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:33.619 EAL: Ask a virtual area of 0x400000000 bytes 00:09:33.619 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:33.619 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:33.619 EAL: Ask a virtual area of 0x61000 bytes 00:09:33.619 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:33.619 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:33.619 EAL: Ask a virtual area of 0x400000000 bytes 00:09:33.619 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:33.619 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:33.619 EAL: Ask a virtual area of 0x61000 bytes 00:09:33.619 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:33.619 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:33.619 EAL: Ask a virtual area of 0x400000000 bytes 00:09:33.619 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:33.619 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:33.619 EAL: Ask a virtual area of 0x61000 bytes 00:09:33.619 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:33.619 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:33.619 EAL: Ask a virtual area of 0x400000000 bytes 00:09:33.619 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:33.619 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:33.619 EAL: Hugepages will be freed exactly as allocated. 00:09:33.619 EAL: No shared files mode enabled, IPC is disabled 00:09:33.619 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: TSC frequency is ~2490000 KHz 00:09:33.878 EAL: Main lcore 0 is ready (tid=7f11a200fa00;cpuset=[0]) 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 0 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 2MB 00:09:33.878 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:33.878 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:33.878 EAL: Mem event callback 'spdk:(nil)' registered 00:09:33.878 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:33.878 00:09:33.878 00:09:33.878 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.878 http://cunit.sourceforge.net/ 00:09:33.878 00:09:33.878 00:09:33.878 Suite: components_suite 00:09:33.878 Test: vtophys_malloc_test ...passed 00:09:33.878 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 4MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 4MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 6MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 6MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 10MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 10MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 18MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 18MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 34MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 34MB 00:09:33.878 EAL: Trying to obtain current memory policy. 
00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 66MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 66MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.878 EAL: Restoring previous memory policy: 4 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was expanded by 130MB 00:09:33.878 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.878 EAL: request: mp_malloc_sync 00:09:33.878 EAL: No shared files mode enabled, IPC is disabled 00:09:33.878 EAL: Heap on socket 0 was shrunk by 130MB 00:09:33.878 EAL: Trying to obtain current memory policy. 00:09:33.878 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:34.137 EAL: Restoring previous memory policy: 4 00:09:34.137 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.137 EAL: request: mp_malloc_sync 00:09:34.137 EAL: No shared files mode enabled, IPC is disabled 00:09:34.137 EAL: Heap on socket 0 was expanded by 258MB 00:09:34.137 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.137 EAL: request: mp_malloc_sync 00:09:34.137 EAL: No shared files mode enabled, IPC is disabled 00:09:34.137 EAL: Heap on socket 0 was shrunk by 258MB 00:09:34.137 EAL: Trying to obtain current memory policy. 00:09:34.137 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:34.137 EAL: Restoring previous memory policy: 4 00:09:34.137 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.137 EAL: request: mp_malloc_sync 00:09:34.137 EAL: No shared files mode enabled, IPC is disabled 00:09:34.137 EAL: Heap on socket 0 was expanded by 514MB 00:09:34.396 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.396 EAL: request: mp_malloc_sync 00:09:34.396 EAL: No shared files mode enabled, IPC is disabled 00:09:34.396 EAL: Heap on socket 0 was shrunk by 514MB 00:09:34.396 EAL: Trying to obtain current memory policy. 
00:09:34.396 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:34.655 EAL: Restoring previous memory policy: 4 00:09:34.655 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.655 EAL: request: mp_malloc_sync 00:09:34.655 EAL: No shared files mode enabled, IPC is disabled 00:09:34.655 EAL: Heap on socket 0 was expanded by 1026MB 00:09:34.655 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.914 passed 00:09:34.914 00:09:34.914 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.914 suites 1 1 n/a 0 0 00:09:34.914 tests 2 2 2 0 0 00:09:34.914 asserts 5554 5554 5554 0 n/a 00:09:34.914 00:09:34.914 Elapsed time = 0.980 seconds 00:09:34.914 EAL: request: mp_malloc_sync 00:09:34.914 EAL: No shared files mode enabled, IPC is disabled 00:09:34.914 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:34.914 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.914 EAL: request: mp_malloc_sync 00:09:34.914 EAL: No shared files mode enabled, IPC is disabled 00:09:34.914 EAL: Heap on socket 0 was shrunk by 2MB 00:09:34.914 EAL: No shared files mode enabled, IPC is disabled 00:09:34.914 EAL: No shared files mode enabled, IPC is disabled 00:09:34.914 EAL: No shared files mode enabled, IPC is disabled 00:09:34.914 00:09:34.914 real 0m1.198s 00:09:34.914 user 0m0.647s 00:09:34.914 sys 0m0.416s 00:09:34.914 10:53:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.914 10:53:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:34.914 ************************************ 00:09:34.914 END TEST env_vtophys 00:09:34.914 ************************************ 00:09:34.914 10:53:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.914 10:53:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.914 10:53:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.914 10:53:01 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.914 ************************************ 00:09:34.914 START TEST env_pci 00:09:34.914 ************************************ 00:09:34.914 10:53:02 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.914 00:09:34.914 00:09:34.914 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.914 http://cunit.sourceforge.net/ 00:09:34.914 00:09:34.914 00:09:34.914 Suite: pci 00:09:34.914 Test: pci_hook ...[2024-12-05 10:53:02.022674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56652 has claimed it 00:09:34.914 passed 00:09:34.914 00:09:34.914 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.914 suites 1 1 n/a 0 0 00:09:34.914 tests 1 1 1 0 0 00:09:34.914 asserts 25 25 25 0 n/a 00:09:34.914 00:09:34.914 Elapsed time = 0.003 seconds 00:09:34.914 EAL: Cannot find device (10000:00:01.0) 00:09:34.914 EAL: Failed to attach device on primary process 00:09:34.914 00:09:34.914 real 0m0.031s 00:09:34.914 user 0m0.013s 00:09:34.914 sys 0m0.018s 00:09:34.914 10:53:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.914 10:53:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:34.914 ************************************ 00:09:34.914 END TEST env_pci 00:09:34.914 ************************************ 00:09:35.174 10:53:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:35.174 10:53:02 env -- env/env.sh@15 -- # uname 00:09:35.174 10:53:02 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:35.174 10:53:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:35.174 10:53:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:35.174 10:53:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:35.174 10:53:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.174 10:53:02 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.174 ************************************ 00:09:35.174 START TEST env_dpdk_post_init 00:09:35.174 ************************************ 00:09:35.174 10:53:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:35.174 EAL: Detected CPU lcores: 10 00:09:35.174 EAL: Detected NUMA nodes: 1 00:09:35.174 EAL: Detected shared linkage of DPDK 00:09:35.174 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:35.174 EAL: Selected IOVA mode 'PA' 00:09:35.174 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:35.174 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:35.174 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:35.174 Starting DPDK initialization... 00:09:35.174 Starting SPDK post initialization... 00:09:35.174 SPDK NVMe probe 00:09:35.174 Attaching to 0000:00:10.0 00:09:35.174 Attaching to 0000:00:11.0 00:09:35.174 Attached to 0000:00:10.0 00:09:35.174 Attached to 0000:00:11.0 00:09:35.174 Cleaning up... 00:09:35.174 00:09:35.174 real 0m0.200s 00:09:35.174 user 0m0.064s 00:09:35.174 sys 0m0.037s 00:09:35.174 ************************************ 00:09:35.174 END TEST env_dpdk_post_init 00:09:35.174 ************************************ 00:09:35.174 10:53:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.174 10:53:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:35.432 10:53:02 env -- env/env.sh@26 -- # uname 00:09:35.433 10:53:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:35.433 10:53:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:35.433 10:53:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.433 10:53:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.433 10:53:02 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.433 ************************************ 00:09:35.433 START TEST env_mem_callbacks 00:09:35.433 ************************************ 00:09:35.433 10:53:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:35.433 EAL: Detected CPU lcores: 10 00:09:35.433 EAL: Detected NUMA nodes: 1 00:09:35.433 EAL: Detected shared linkage of DPDK 00:09:35.433 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:35.433 EAL: Selected IOVA mode 'PA' 00:09:35.433 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:35.433 00:09:35.433 00:09:35.433 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.433 http://cunit.sourceforge.net/ 00:09:35.433 00:09:35.433 00:09:35.433 Suite: memory 00:09:35.433 Test: test ... 
00:09:35.433 register 0x200000200000 2097152 00:09:35.433 malloc 3145728 00:09:35.433 register 0x200000400000 4194304 00:09:35.433 buf 0x200000500000 len 3145728 PASSED 00:09:35.433 malloc 64 00:09:35.433 buf 0x2000004fff40 len 64 PASSED 00:09:35.433 malloc 4194304 00:09:35.433 register 0x200000800000 6291456 00:09:35.433 buf 0x200000a00000 len 4194304 PASSED 00:09:35.433 free 0x200000500000 3145728 00:09:35.433 free 0x2000004fff40 64 00:09:35.433 unregister 0x200000400000 4194304 PASSED 00:09:35.433 free 0x200000a00000 4194304 00:09:35.433 unregister 0x200000800000 6291456 PASSED 00:09:35.433 malloc 8388608 00:09:35.433 register 0x200000400000 10485760 00:09:35.433 buf 0x200000600000 len 8388608 PASSED 00:09:35.433 free 0x200000600000 8388608 00:09:35.433 unregister 0x200000400000 10485760 PASSED 00:09:35.433 passed 00:09:35.433 00:09:35.433 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.433 suites 1 1 n/a 0 0 00:09:35.433 tests 1 1 1 0 0 00:09:35.433 asserts 15 15 15 0 n/a 00:09:35.433 00:09:35.433 Elapsed time = 0.009 seconds 00:09:35.433 00:09:35.433 real 0m0.159s 00:09:35.433 user 0m0.021s 00:09:35.433 sys 0m0.033s 00:09:35.433 10:53:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.433 10:53:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:35.433 ************************************ 00:09:35.433 END TEST env_mem_callbacks 00:09:35.433 ************************************ 00:09:35.692 ************************************ 00:09:35.692 END TEST env 00:09:35.692 ************************************ 00:09:35.692 00:09:35.692 real 0m2.384s 00:09:35.692 user 0m1.158s 00:09:35.692 sys 0m0.879s 00:09:35.692 10:53:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.692 10:53:02 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.692 10:53:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.692 10:53:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.692 10:53:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.692 10:53:02 -- common/autotest_common.sh@10 -- # set +x 00:09:35.692 ************************************ 00:09:35.692 START TEST rpc 00:09:35.692 ************************************ 00:09:35.692 10:53:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.692 * Looking for test storage... 
00:09:35.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:09:35.692 10:53:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:35.692 10:53:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:35.692 10:53:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:35.951 10:53:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:35.951 10:53:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:35.951 10:53:02 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:35.951 10:53:02 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:35.951 10:53:02 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:35.951 10:53:02 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:35.951 10:53:02 rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:35.951 10:53:02 rpc -- scripts/common.sh@345 -- # : 1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:35.951 10:53:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:35.951 10:53:02 rpc -- scripts/common.sh@365 -- # decimal 1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@353 -- # local d=1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:35.951 10:53:02 rpc -- scripts/common.sh@355 -- # echo 1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:35.951 10:53:02 rpc -- scripts/common.sh@366 -- # decimal 2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@353 -- # local d=2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:35.951 10:53:02 rpc -- scripts/common.sh@355 -- # echo 2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:35.951 10:53:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:35.951 10:53:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:35.951 10:53:02 rpc -- scripts/common.sh@368 -- # return 0
00:09:35.951 10:53:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:35.951 10:53:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:35.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.951 --rc genhtml_branch_coverage=1
00:09:35.951 --rc genhtml_function_coverage=1
00:09:35.951 --rc genhtml_legend=1
00:09:35.951 --rc geninfo_all_blocks=1
00:09:35.951 --rc geninfo_unexecuted_blocks=1
00:09:35.951
00:09:35.951 '
00:09:35.951 10:53:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:35.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.952 --rc genhtml_branch_coverage=1
00:09:35.952 --rc genhtml_function_coverage=1
00:09:35.952 --rc genhtml_legend=1
00:09:35.952 --rc geninfo_all_blocks=1
00:09:35.952 --rc geninfo_unexecuted_blocks=1
00:09:35.952
00:09:35.952 '
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:35.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.952 --rc genhtml_branch_coverage=1
00:09:35.952 --rc genhtml_function_coverage=1
00:09:35.952 --rc genhtml_legend=1
00:09:35.952 --rc geninfo_all_blocks=1
00:09:35.952 --rc geninfo_unexecuted_blocks=1
00:09:35.952
00:09:35.952 '
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:35.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:35.952 --rc genhtml_branch_coverage=1
00:09:35.952 --rc genhtml_function_coverage=1
00:09:35.952 --rc genhtml_legend=1
00:09:35.952 --rc geninfo_all_blocks=1
00:09:35.952 --rc geninfo_unexecuted_blocks=1
00:09:35.952
00:09:35.952 '
00:09:35.952 10:53:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56775
00:09:35.952 10:53:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:09:35.952 10:53:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:35.952 10:53:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56775
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 56775 ']'
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:35.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:35.952 10:53:02 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:35.952 [2024-12-05 10:53:02.962338] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:09:35.952 [2024-12-05 10:53:02.962411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56775 ]
00:09:36.212 [2024-12-05 10:53:03.110961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.212 [2024-12-05 10:53:03.157067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:09:36.212 [2024-12-05 10:53:03.157115] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56775' to capture a snapshot of events at runtime.
00:09:36.212 [2024-12-05 10:53:03.157125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:36.212 [2024-12-05 10:53:03.157134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:36.212 [2024-12-05 10:53:03.157141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56775 for offline analysis/debug.
00:09:36.212 [2024-12-05 10:53:03.157449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.212 [2024-12-05 10:53:03.212242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:36.777 10:53:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.777 10:53:03 rpc -- common/autotest_common.sh@868 -- # return 0
00:09:36.777 10:53:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:09:36.777 10:53:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:09:36.777 10:53:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:09:36.777 10:53:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:09:36.777 10:53:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:36.777 10:53:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:36.777 10:53:03 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:36.777 ************************************
00:09:36.777 START TEST rpc_integrity
00:09:36.777 ************************************
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:09:36.777 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:36.777 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:09:36.777 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:09:36.777 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:09:36.777 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:36.777 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.035 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.035 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:09:37.035 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:09:37.035 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.035 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.035 10:53:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.035 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:09:37.035 {
00:09:37.035 "name": "Malloc0",
00:09:37.035 "aliases": [
00:09:37.035 "887b19b2-7069-4d01-bf47-21e9ae129ea3"
00:09:37.035 ],
00:09:37.035 "product_name": "Malloc disk",
00:09:37.035 "block_size": 512,
00:09:37.035 "num_blocks": 16384,
00:09:37.035 "uuid": "887b19b2-7069-4d01-bf47-21e9ae129ea3",
00:09:37.035 "assigned_rate_limits": {
00:09:37.035 "rw_ios_per_sec": 0,
00:09:37.035 "rw_mbytes_per_sec": 0,
00:09:37.035 "r_mbytes_per_sec": 0,
00:09:37.035 "w_mbytes_per_sec": 0
00:09:37.035 },
00:09:37.035 "claimed": false,
00:09:37.035 "zoned": false,
00:09:37.035 "supported_io_types": {
00:09:37.035 "read": true,
00:09:37.035 "write": true,
00:09:37.035 "unmap": true,
00:09:37.035 "flush": true,
00:09:37.035 "reset": true,
00:09:37.035 "nvme_admin": false,
00:09:37.035 "nvme_io": false,
00:09:37.035 "nvme_io_md": false,
00:09:37.035 "write_zeroes": true,
00:09:37.035 "zcopy": true,
00:09:37.035 "get_zone_info": false,
00:09:37.035 "zone_management": false,
00:09:37.035 "zone_append": false,
00:09:37.035 "compare": false,
00:09:37.035 "compare_and_write": false,
00:09:37.035 "abort": true,
00:09:37.035 "seek_hole": false,
00:09:37.035 "seek_data": false,
00:09:37.035 "copy": true,
00:09:37.035 "nvme_iov_md": false
00:09:37.035 },
00:09:37.035 "memory_domains": [
00:09:37.035 {
00:09:37.036 "dma_device_id": "system",
00:09:37.036 "dma_device_type": 1
00:09:37.036 },
00:09:37.036 {
00:09:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.036 "dma_device_type": 2
00:09:37.036 }
00:09:37.036 ],
00:09:37.036 "driver_specific": {}
00:09:37.036 }
00:09:37.036 ]'
00:09:37.036 10:53:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.036 [2024-12-05 10:53:04.017628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:09:37.036 [2024-12-05 10:53:04.017768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:37.036 [2024-12-05 10:53:04.017808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd5fcb0
00:09:37.036 [2024-12-05 10:53:04.017820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:37.036 [2024-12-05 10:53:04.019096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:37.036 [2024-12-05 10:53:04.019120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:09:37.036 Passthru0
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:09:37.036 {
00:09:37.036 "name": "Malloc0",
00:09:37.036 "aliases": [
00:09:37.036 "887b19b2-7069-4d01-bf47-21e9ae129ea3"
00:09:37.036 ],
00:09:37.036 "product_name": "Malloc disk",
00:09:37.036 "block_size": 512,
00:09:37.036 "num_blocks": 16384,
00:09:37.036 "uuid": "887b19b2-7069-4d01-bf47-21e9ae129ea3",
00:09:37.036 "assigned_rate_limits": {
00:09:37.036 "rw_ios_per_sec": 0,
00:09:37.036 "rw_mbytes_per_sec": 0,
00:09:37.036 "r_mbytes_per_sec": 0,
00:09:37.036 "w_mbytes_per_sec": 0
00:09:37.036 },
00:09:37.036 "claimed": true,
00:09:37.036 "claim_type": "exclusive_write",
00:09:37.036 "zoned": false,
00:09:37.036 "supported_io_types": {
00:09:37.036 "read": true,
00:09:37.036 "write": true,
00:09:37.036 "unmap": true,
00:09:37.036 "flush": true,
00:09:37.036 "reset": true,
00:09:37.036 "nvme_admin": false,
00:09:37.036 "nvme_io": false,
00:09:37.036 "nvme_io_md": false,
00:09:37.036 "write_zeroes": true,
00:09:37.036 "zcopy": true,
00:09:37.036 "get_zone_info": false,
00:09:37.036 "zone_management": false,
00:09:37.036 "zone_append": false,
00:09:37.036 "compare": false,
00:09:37.036 "compare_and_write": false,
00:09:37.036 "abort": true,
00:09:37.036 "seek_hole": false,
00:09:37.036 "seek_data": false,
00:09:37.036 "copy": true,
00:09:37.036 "nvme_iov_md": false
00:09:37.036 },
00:09:37.036 "memory_domains": [
00:09:37.036 {
00:09:37.036 "dma_device_id": "system",
00:09:37.036 "dma_device_type": 1
00:09:37.036 },
00:09:37.036 {
00:09:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.036 "dma_device_type": 2
00:09:37.036 }
00:09:37.036 ],
00:09:37.036 "driver_specific": {}
00:09:37.036 },
00:09:37.036 {
00:09:37.036 "name": "Passthru0",
00:09:37.036 "aliases": [
00:09:37.036 "e1d4eeab-2059-5239-83a9-314fa7c26453"
00:09:37.036 ],
00:09:37.036 "product_name": "passthru",
00:09:37.036 "block_size": 512,
00:09:37.036 "num_blocks": 16384,
00:09:37.036 "uuid": "e1d4eeab-2059-5239-83a9-314fa7c26453",
00:09:37.036 "assigned_rate_limits": {
00:09:37.036 "rw_ios_per_sec": 0,
00:09:37.036 "rw_mbytes_per_sec": 0,
00:09:37.036 "r_mbytes_per_sec": 0,
00:09:37.036 "w_mbytes_per_sec": 0
00:09:37.036 },
00:09:37.036 "claimed": false,
00:09:37.036 "zoned": false,
00:09:37.036 "supported_io_types": {
00:09:37.036 "read": true,
00:09:37.036 "write": true,
00:09:37.036 "unmap": true,
00:09:37.036 "flush": true,
00:09:37.036 "reset": true,
00:09:37.036 "nvme_admin": false,
00:09:37.036 "nvme_io": false,
00:09:37.036 "nvme_io_md": false,
00:09:37.036 "write_zeroes": true,
00:09:37.036 "zcopy": true,
00:09:37.036 "get_zone_info": false,
00:09:37.036 "zone_management": false,
00:09:37.036 "zone_append": false,
00:09:37.036 "compare": false,
00:09:37.036 "compare_and_write": false,
00:09:37.036 "abort": true,
00:09:37.036 "seek_hole": false,
00:09:37.036 "seek_data": false,
00:09:37.036 "copy": true,
00:09:37.036 "nvme_iov_md": false
00:09:37.036 },
00:09:37.036 "memory_domains": [
00:09:37.036 {
00:09:37.036 "dma_device_id": "system",
00:09:37.036 "dma_device_type": 1
00:09:37.036 },
00:09:37.036 {
00:09:37.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.036 "dma_device_type": 2
00:09:37.036 }
00:09:37.036 ],
00:09:37.036 "driver_specific": {
00:09:37.036 "passthru": {
00:09:37.036 "name": "Passthru0",
00:09:37.036 "base_bdev_name": "Malloc0"
00:09:37.036 }
00:09:37.036 }
00:09:37.036 }
00:09:37.036 ]'
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:09:37.036 ************************************
00:09:37.036 END TEST rpc_integrity
00:09:37.036 ************************************
00:09:37.036 10:53:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:09:37.036
00:09:37.036 real 0m0.319s
00:09:37.036 user 0m0.192s
00:09:37.036 sys 0m0.057s
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.036 10:53:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 10:53:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:09:37.295 10:53:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.295 10:53:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.295 10:53:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 ************************************
00:09:37.295 START TEST rpc_plugins
00:09:37.295 ************************************
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:09:37.295 {
00:09:37.295 "name": "Malloc1",
00:09:37.295 "aliases": [
00:09:37.295 "6a587804-0acc-48d6-8005-33665c875260"
00:09:37.295 ],
00:09:37.295 "product_name": "Malloc disk",
00:09:37.295 "block_size": 4096,
00:09:37.295 "num_blocks": 256,
00:09:37.295 "uuid": "6a587804-0acc-48d6-8005-33665c875260",
00:09:37.295 "assigned_rate_limits": {
00:09:37.295 "rw_ios_per_sec": 0,
00:09:37.295 "rw_mbytes_per_sec": 0,
00:09:37.295 "r_mbytes_per_sec": 0,
00:09:37.295 "w_mbytes_per_sec": 0
00:09:37.295 },
00:09:37.295 "claimed": false,
00:09:37.295 "zoned": false,
00:09:37.295 "supported_io_types": {
00:09:37.295 "read": true,
00:09:37.295 "write": true,
00:09:37.295 "unmap": true,
00:09:37.295 "flush": true,
00:09:37.295 "reset": true,
00:09:37.295 "nvme_admin": false,
00:09:37.295 "nvme_io": false,
00:09:37.295 "nvme_io_md": false,
00:09:37.295 "write_zeroes": true,
00:09:37.295 "zcopy": true,
00:09:37.295 "get_zone_info": false,
00:09:37.295 "zone_management": false,
00:09:37.295 "zone_append": false,
00:09:37.295 "compare": false,
00:09:37.295 "compare_and_write": false,
00:09:37.295 "abort": true,
00:09:37.295 "seek_hole": false,
00:09:37.295 "seek_data": false,
00:09:37.295 "copy": true,
00:09:37.295 "nvme_iov_md": false
00:09:37.295 },
00:09:37.295 "memory_domains": [
00:09:37.295 {
00:09:37.295 "dma_device_id": "system",
00:09:37.295 "dma_device_type": 1
00:09:37.295 },
00:09:37.295 {
00:09:37.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.295 "dma_device_type": 2
00:09:37.295 }
00:09:37.295 ],
00:09:37.295 "driver_specific": {}
00:09:37.295 }
00:09:37.295 ]'
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:09:37.295 ************************************
00:09:37.295 END TEST rpc_plugins
00:09:37.295 ************************************
00:09:37.295 10:53:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:09:37.295
00:09:37.295 real 0m0.157s
00:09:37.295 user 0m0.095s
00:09:37.295 sys 0m0.025s
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.295 10:53:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:09:37.578 10:53:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:09:37.578 10:53:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.578 10:53:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.578 10:53:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:37.578 ************************************
00:09:37.578 START TEST rpc_trace_cmd_test
00:09:37.578 ************************************
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.578 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:09:37.578 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56775",
00:09:37.578 "tpoint_group_mask": "0x8",
00:09:37.578 "iscsi_conn": {
00:09:37.578 "mask": "0x2",
00:09:37.578 "tpoint_mask": "0x0"
00:09:37.578 },
00:09:37.578 "scsi": {
00:09:37.578 "mask": "0x4",
00:09:37.578 "tpoint_mask": "0x0"
00:09:37.578 },
00:09:37.578 "bdev": {
00:09:37.578 "mask": "0x8",
00:09:37.578 "tpoint_mask": "0xffffffffffffffff"
00:09:37.578 },
00:09:37.578 "nvmf_rdma": {
00:09:37.578 "mask": "0x10",
00:09:37.578 "tpoint_mask": "0x0"
00:09:37.578 },
00:09:37.578 "nvmf_tcp": {
00:09:37.578 "mask": "0x20",
00:09:37.578 "tpoint_mask": "0x0"
00:09:37.578 },
00:09:37.579 "ftl": {
00:09:37.579 "mask": "0x40",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "blobfs": {
00:09:37.579 "mask": "0x80",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "dsa": {
00:09:37.579 "mask": "0x200",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "thread": {
00:09:37.579 "mask": "0x400",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "nvme_pcie": {
00:09:37.579 "mask": "0x800",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "iaa": {
00:09:37.579 "mask": "0x1000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "nvme_tcp": {
00:09:37.579 "mask": "0x2000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "bdev_nvme": {
00:09:37.579 "mask": "0x4000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "sock": {
00:09:37.579 "mask": "0x8000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "blob": {
00:09:37.579 "mask": "0x10000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "bdev_raid": {
00:09:37.579 "mask": "0x20000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 },
00:09:37.579 "scheduler": {
00:09:37.579 "mask": "0x40000",
00:09:37.579 "tpoint_mask": "0x0"
00:09:37.579 }
00:09:37.579 }'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:09:37.579 ************************************
00:09:37.579 END TEST rpc_trace_cmd_test
00:09:37.579 ************************************
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:09:37.579
00:09:37.579 real 0m0.237s
00:09:37.579 user 0m0.179s
00:09:37.579 sys 0m0.048s
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.579 10:53:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 10:53:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:09:37.838 10:53:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:09:37.838 10:53:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:09:37.838 10:53:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.838 10:53:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.838 10:53:04 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 ************************************
00:09:37.838 START TEST rpc_daemon_integrity
00:09:37.838 ************************************
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:09:37.838 {
00:09:37.838 "name": "Malloc2",
00:09:37.838 "aliases": [
00:09:37.838 "f5635fc4-7c69-4cd3-ba73-fdab53c8e4b1"
00:09:37.838 ],
00:09:37.838 "product_name": "Malloc disk",
00:09:37.838 "block_size": 512,
00:09:37.838 "num_blocks": 16384,
00:09:37.838 "uuid": "f5635fc4-7c69-4cd3-ba73-fdab53c8e4b1",
00:09:37.838 "assigned_rate_limits": {
00:09:37.838 "rw_ios_per_sec": 0,
00:09:37.838 "rw_mbytes_per_sec": 0,
00:09:37.838 "r_mbytes_per_sec": 0,
00:09:37.838 "w_mbytes_per_sec": 0
00:09:37.838 },
00:09:37.838 "claimed": false,
00:09:37.838 "zoned": false,
00:09:37.838 "supported_io_types": {
00:09:37.838 "read": true,
00:09:37.838 "write": true,
00:09:37.838 "unmap": true,
00:09:37.838 "flush": true,
00:09:37.838 "reset": true,
00:09:37.838 "nvme_admin": false,
00:09:37.838 "nvme_io": false,
00:09:37.838 "nvme_io_md": false,
00:09:37.838 "write_zeroes": true,
00:09:37.838 "zcopy": true,
00:09:37.838 "get_zone_info": false,
00:09:37.838 "zone_management": false,
00:09:37.838 "zone_append": false,
00:09:37.838 "compare": false,
00:09:37.838 "compare_and_write": false,
00:09:37.838 "abort": true,
00:09:37.838 "seek_hole": false,
00:09:37.838 "seek_data": false,
00:09:37.838 "copy": true,
00:09:37.838 "nvme_iov_md": false
00:09:37.838 },
00:09:37.838 "memory_domains": [
00:09:37.838 {
00:09:37.838 "dma_device_id": "system",
00:09:37.838 "dma_device_type": 1
00:09:37.838 },
00:09:37.838 {
00:09:37.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.838 "dma_device_type": 2
00:09:37.838 }
00:09:37.838 ],
00:09:37.838 "driver_specific": {}
00:09:37.838 }
00:09:37.838 ]'
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 [2024-12-05 10:53:04.912411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:09:37.838 [2024-12-05 10:53:04.912461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:37.838 [2024-12-05 10:53:04.912478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef4020
00:09:37.838 [2024-12-05 10:53:04.912486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:37.838 [2024-12-05 10:53:04.913881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:37.838 [2024-12-05 10:53:04.913913] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:09:37.838 Passthru0
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:37.838 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:09:37.838 {
00:09:37.839 "name": "Malloc2",
00:09:37.839 "aliases": [
00:09:37.839 "f5635fc4-7c69-4cd3-ba73-fdab53c8e4b1"
00:09:37.839 ],
00:09:37.839 "product_name": "Malloc disk",
00:09:37.839 "block_size": 512,
00:09:37.839 "num_blocks": 16384,
00:09:37.839 "uuid": "f5635fc4-7c69-4cd3-ba73-fdab53c8e4b1",
00:09:37.839 "assigned_rate_limits": {
00:09:37.839 "rw_ios_per_sec": 0,
00:09:37.839 "rw_mbytes_per_sec": 0,
00:09:37.839 "r_mbytes_per_sec": 0,
00:09:37.839 "w_mbytes_per_sec": 0
00:09:37.839 },
00:09:37.839 "claimed": true,
00:09:37.839 "claim_type": "exclusive_write",
00:09:37.839 "zoned": false,
00:09:37.839 "supported_io_types": {
00:09:37.839 "read": true,
00:09:37.839 "write": true,
00:09:37.839 "unmap": true,
00:09:37.839 "flush": true,
00:09:37.839 "reset": true,
00:09:37.839 "nvme_admin": false,
00:09:37.839 "nvme_io": false,
00:09:37.839 "nvme_io_md": false,
00:09:37.839 "write_zeroes": true,
00:09:37.839 "zcopy": true,
00:09:37.839 "get_zone_info": false,
00:09:37.839 "zone_management": false,
00:09:37.839 "zone_append": false,
00:09:37.839 "compare": false,
00:09:37.839 "compare_and_write": false,
00:09:37.839 "abort": true,
00:09:37.839 "seek_hole": false,
00:09:37.839 "seek_data": false,
00:09:37.839 "copy": true,
00:09:37.839 "nvme_iov_md": false
00:09:37.839 },
00:09:37.839 "memory_domains": [
00:09:37.839 {
00:09:37.839 "dma_device_id": "system",
00:09:37.839 "dma_device_type": 1
00:09:37.839 },
00:09:37.839 {
00:09:37.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.839 "dma_device_type": 2
00:09:37.839 }
00:09:37.839 ],
00:09:37.839 "driver_specific": {}
00:09:37.839 },
00:09:37.839 {
00:09:37.839 "name": "Passthru0",
00:09:37.839 "aliases": [
00:09:37.839 "8b6a3eda-27d9-547a-a5d9-f70cbff19376"
00:09:37.839 ],
00:09:37.839 "product_name": "passthru",
00:09:37.839 "block_size": 512,
00:09:37.839 "num_blocks": 16384,
00:09:37.839 "uuid": "8b6a3eda-27d9-547a-a5d9-f70cbff19376",
00:09:37.839 "assigned_rate_limits": {
00:09:37.839 "rw_ios_per_sec": 0,
00:09:37.839 "rw_mbytes_per_sec": 0,
00:09:37.839 "r_mbytes_per_sec": 0,
00:09:37.839 "w_mbytes_per_sec": 0
00:09:37.839 },
00:09:37.839 "claimed": false,
00:09:37.839 "zoned": false,
00:09:37.839 "supported_io_types": {
00:09:37.839 "read": true,
00:09:37.839 "write": true,
00:09:37.839 "unmap": true,
00:09:37.839 "flush": true,
00:09:37.839 "reset": true,
00:09:37.839 "nvme_admin": false,
00:09:37.839 "nvme_io": false,
00:09:37.839 "nvme_io_md": false,
00:09:37.839 "write_zeroes": true,
00:09:37.839 "zcopy": true,
00:09:37.839 "get_zone_info": false,
00:09:37.839 "zone_management": false,
00:09:37.839 "zone_append": false,
00:09:37.839 "compare": false,
00:09:37.839 "compare_and_write": false,
00:09:37.839 "abort": true,
00:09:37.839 "seek_hole": false,
00:09:37.839 "seek_data": false,
00:09:37.839 "copy": true,
00:09:37.839 "nvme_iov_md": false
00:09:37.839 },
00:09:37.839 "memory_domains": [
00:09:37.839 {
00:09:37.839 "dma_device_id": "system",
00:09:37.839 "dma_device_type": 1
00:09:37.839 },
00:09:37.839 {
00:09:37.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:37.839 "dma_device_type": 2
00:09:37.839 }
00:09:37.839 ],
00:09:37.839 "driver_specific": {
00:09:37.839 "passthru": {
00:09:37.839 "name": "Passthru0",
00:09:37.839 "base_bdev_name": "Malloc2"
00:09:37.839 }
00:09:37.839 }
00:09:37.839 }
00:09:37.839 ]'
00:09:37.839 10:53:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:09:38.097 ************************************
00:09:38.097 END TEST rpc_daemon_integrity
00:09:38.097 ************************************
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:09:38.097
00:09:38.097 real 0m0.308s
00:09:38.097 user 0m0.186s
00:09:38.097 sys 0m0.058s
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:38.097 10:53:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:09:38.097 10:53:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:09:38.097 10:53:05 rpc -- rpc/rpc.sh@84 -- # killprocess 56775
00:09:38.097 10:53:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 56775 ']'
00:09:38.097 10:53:05 rpc -- common/autotest_common.sh@958 -- # kill -0 56775
00:09:38.097 10:53:05 rpc -- common/autotest_common.sh@959 -- # uname
00:09:38.097 10:53:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:38.097 10:53:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56775
00:09:38.097 killing process with pid 56775
00:09:38.098 10:53:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:38.098 10:53:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:38.098 10:53:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56775'
00:09:38.098 10:53:05 rpc -- common/autotest_common.sh@973 -- # kill 56775
00:09:38.098 10:53:05 rpc -- common/autotest_common.sh@978 -- # wait 56775
00:09:38.355
00:09:38.355 real 0m2.823s
00:09:38.355 user 0m3.511s
00:09:38.355 sys 0m0.775s
00:09:38.355 10:53:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:38.355 ************************************
00:09:38.355 END TEST rpc
00:09:38.355 ************************************
00:09:38.355 10:53:05 rpc -- common/autotest_common.sh@10 -- # set +x
00:09:38.613 10:53:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:09:38.613 10:53:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:38.613 10:53:05 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:38.613 10:53:05 -- common/autotest_common.sh@10 -- # set +x
00:09:38.613 ************************************
00:09:38.613 START TEST skip_rpc
00:09:38.613 ************************************
00:09:38.613 10:53:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:09:38.613 * Looking for test storage...
00:09:38.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:09:38.613 10:53:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:38.613 10:53:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:38.613 10:53:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@345 -- # : 1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:38.872 10:53:05 skip_rpc -- scripts/common.sh@368 -- # return 0
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:38.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.872 --rc genhtml_branch_coverage=1
00:09:38.872 --rc genhtml_function_coverage=1
00:09:38.872 --rc genhtml_legend=1
00:09:38.872 --rc geninfo_all_blocks=1
00:09:38.872 --rc geninfo_unexecuted_blocks=1
00:09:38.872
00:09:38.872 '
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:38.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.872 --rc genhtml_branch_coverage=1
00:09:38.872 --rc genhtml_function_coverage=1
00:09:38.872 --rc genhtml_legend=1
00:09:38.872 --rc geninfo_all_blocks=1
00:09:38.872 --rc geninfo_unexecuted_blocks=1
00:09:38.872
00:09:38.872 '
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:38.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.872 --rc genhtml_branch_coverage=1
00:09:38.872 --rc genhtml_function_coverage=1
00:09:38.872 --rc genhtml_legend=1
00:09:38.872 --rc geninfo_all_blocks=1
00:09:38.872 --rc geninfo_unexecuted_blocks=1
00:09:38.872
00:09:38.872 '
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:38.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:38.872 --rc genhtml_branch_coverage=1
00:09:38.872 --rc genhtml_function_coverage=1
00:09:38.872 --rc genhtml_legend=1
00:09:38.872 --rc geninfo_all_blocks=1
00:09:38.872 --rc geninfo_unexecuted_blocks=1
00:09:38.872
00:09:38.872 '
00:09:38.872 10:53:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:38.872 10:53:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:38.872 10:53:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:38.872 10:53:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:38.872 ************************************
00:09:38.872 START TEST skip_rpc
00:09:38.872 ************************************
00:09:38.872 10:53:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:09:38.872 10:53:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56975
00:09:38.872 10:53:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:09:38.872 10:53:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:38.872 10:53:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:09:38.873 [2024-12-05 10:53:05.874243] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:09:38.873 [2024-12-05 10:53:05.874326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56975 ]
00:09:38.873 [2024-12-05 10:53:06.024144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:39.131 [2024-12-05 10:53:06.075584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:39.131 [2024-12-05 10:53:06.131362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:44.469 10:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56975
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56975 ']'
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56975
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56975
00:09:44.470 killing process with pid 56975
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56975'
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56975
00:09:44.470 10:53:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56975
00:09:44.470
00:09:44.470 real 0m5.372s
00:09:44.470 user 0m5.040s
00:09:44.470 sys 0m0.260s
00:09:44.470 10:53:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.470 10:53:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:44.470 ************************************
00:09:44.470 END TEST skip_rpc
00:09:44.470 ************************************
00:09:44.470 10:53:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:09:44.470 10:53:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:44.470 10:53:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:44.470 10:53:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:44.470 ************************************
00:09:44.470 START TEST skip_rpc_with_json
00:09:44.470 ************************************
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57062
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57062
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57062 ']'
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:44.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:44.470 10:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:44.470 [2024-12-05 10:53:11.324303] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:09:44.470 [2024-12-05 10:53:11.324821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57062 ] 00:09:44.470 [2024-12-05 10:53:11.479196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.470 [2024-12-05 10:53:11.522421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.470 [2024-12-05 10:53:11.578693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.408 [2024-12-05 10:53:12.226904] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:45.408 request: 00:09:45.408 { 00:09:45.408 "trtype": "tcp", 00:09:45.408 "method": "nvmf_get_transports", 00:09:45.408 "req_id": 1 00:09:45.408 } 00:09:45.408 Got JSON-RPC error response 00:09:45.408 response: 00:09:45.408 { 00:09:45.408 "code": -19, 00:09:45.408 "message": "No such device" 00:09:45.408 } 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.408 [2024-12-05 10:53:12.246957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.408 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:45.408 { 00:09:45.408 "subsystems": [ 00:09:45.408 { 00:09:45.408 "subsystem": "fsdev", 00:09:45.408 "config": [ 00:09:45.408 { 00:09:45.408 "method": "fsdev_set_opts", 00:09:45.408 "params": { 00:09:45.408 "fsdev_io_pool_size": 65535, 00:09:45.408 "fsdev_io_cache_size": 256 00:09:45.408 } 00:09:45.408 } 00:09:45.408 ] 00:09:45.408 }, 00:09:45.408 { 00:09:45.408 "subsystem": "keyring", 00:09:45.408 "config": [] 00:09:45.408 }, 00:09:45.408 { 00:09:45.408 "subsystem": "iobuf", 00:09:45.408 "config": [ 00:09:45.408 { 00:09:45.408 "method": "iobuf_set_options", 00:09:45.408 "params": { 00:09:45.408 "small_pool_count": 8192, 00:09:45.408 "large_pool_count": 1024, 00:09:45.408 "small_bufsize": 8192, 00:09:45.408 "large_bufsize": 135168, 00:09:45.408 "enable_numa": false 00:09:45.408 } 
        }
      ]
    },
    {
      "subsystem": "sock",
      "config": [
        {
          "method": "sock_set_default_impl",
          "params": {
            "impl_name": "uring"
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "ssl",
            "recv_buf_size": 4096,
            "send_buf_size": 4096,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "posix",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": true,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        },
        {
          "method": "sock_impl_set_options",
          "params": {
            "impl_name": "uring",
            "recv_buf_size": 2097152,
            "send_buf_size": 2097152,
            "enable_recv_pipe": true,
            "enable_quickack": false,
            "enable_placement_id": 0,
            "enable_zerocopy_send_server": false,
            "enable_zerocopy_send_client": false,
            "zerocopy_threshold": 0,
            "tls_version": 0,
            "enable_ktls": false
          }
        }
      ]
    },
    {
      "subsystem": "vmd",
      "config": []
    },
    {
      "subsystem": "accel",
      "config": [
        {
          "method": "accel_set_options",
          "params": {
            "small_cache_size": 128,
            "large_cache_size": 16,
            "task_count": 2048,
            "sequence_count": 2048,
            "buf_count": 2048
          }
        }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_set_options",
          "params": {
            "bdev_io_pool_size": 65535,
            "bdev_io_cache_size": 256,
            "bdev_auto_examine": true,
            "iobuf_small_cache_size": 128,
            "iobuf_large_cache_size": 16
          }
        },
        {
          "method": "bdev_raid_set_options",
          "params": {
            "process_window_size_kb": 1024,
            "process_max_bandwidth_mb_sec": 0
          }
        },
        {
          "method": "bdev_iscsi_set_options",
          "params": {
            "timeout_sec": 30
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "action_on_timeout": "none",
            "timeout_us": 0,
            "timeout_admin_us": 0,
            "keep_alive_timeout_ms": 10000,
            "arbitration_burst": 0,
            "low_priority_weight": 0,
            "medium_priority_weight": 0,
            "high_priority_weight": 0,
            "nvme_adminq_poll_period_us": 10000,
            "nvme_ioq_poll_period_us": 0,
            "io_queue_requests": 0,
            "delay_cmd_submit": true,
            "transport_retry_count": 4,
            "bdev_retry_count": 3,
            "transport_ack_timeout": 0,
            "ctrlr_loss_timeout_sec": 0,
            "reconnect_delay_sec": 0,
            "fast_io_fail_timeout_sec": 0,
            "disable_auto_failback": false,
            "generate_uuids": false,
            "transport_tos": 0,
            "nvme_error_stat": false,
            "rdma_srq_size": 0,
            "io_path_stat": false,
            "allow_accel_sequence": false,
            "rdma_max_cq_size": 0,
            "rdma_cm_event_timeout_ms": 0,
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "bdev_nvme_set_hotplug",
          "params": {
            "period_us": 100000,
            "enable": false
          }
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    },
    {
      "subsystem": "scsi",
      "config": null
    },
    {
      "subsystem": "scheduler",
      "config": [
        {
          "method": "framework_set_scheduler",
          "params": {
            "name": "static"
          }
        }
      ]
    },
    {
      "subsystem": "vhost_scsi",
      "config": []
    },
    {
      "subsystem": "vhost_blk",
      "config": []
    },
    {
      "subsystem": "ublk",
      "config": []
    },
    {
      "subsystem": "nbd",
      "config": []
    },
    {
      "subsystem": "nvmf",
      "config": [
        {
          "method": "nvmf_set_config",
          "params": {
            "discovery_filter": "match_any",
            "admin_cmd_passthru": {
              "identify_ctrlr": false
            },
            "dhchap_digests": [
              "sha256",
              "sha384",
              "sha512"
            ],
            "dhchap_dhgroups": [
              "null",
              "ffdhe2048",
              "ffdhe3072",
              "ffdhe4096",
              "ffdhe6144",
              "ffdhe8192"
            ]
          }
        },
        {
          "method": "nvmf_set_max_subsystems",
          "params": {
            "max_subsystems": 1024
          }
        },
        {
          "method": "nvmf_set_crdt",
          "params": {
            "crdt1": 0,
            "crdt2": 0,
            "crdt3": 0
          }
        },
        {
          "method": "nvmf_create_transport",
          "params": {
            "trtype": "TCP",
            "max_queue_depth": 128,
            "max_io_qpairs_per_ctrlr": 127,
            "in_capsule_data_size": 4096,
            "max_io_size": 131072,
            "io_unit_size": 131072,
            "max_aq_depth": 128,
            "num_shared_buffers": 511,
            "buf_cache_size": 4294967295,
            "dif_insert_or_strip": false,
            "zcopy": false,
            "c2h_success": true,
            "sock_priority": 0,
            "abort_timeout_sec": 1,
            "ack_timeout": 0,
            "data_wr_pool_size": 0
          }
        }
      ]
    },
    {
      "subsystem": "iscsi",
      "config": [
        {
          "method": "iscsi_set_options",
          "params": {
            "node_base": "iqn.2016-06.io.spdk",
            "max_sessions": 128,
            "max_connections_per_session": 2,
            "max_queue_depth": 64,
            "default_time2wait": 2,
            "default_time2retain": 20,
            "first_burst_length": 8192,
            "immediate_data": true,
            "allow_duplicated_isid": false,
            "error_recovery_level": 0,
            "nop_timeout": 60,
            "nop_in_interval": 30,
            "disable_chap": false,
            "require_chap": false,
            "mutual_chap": false,
            "chap_group": 0,
            "max_large_datain_per_connection": 64,
            "max_r2t_per_connection": 4,
            "pdu_pool_size": 36864,
            "immediate_data_pool_size": 16384,
            "data_out_pool_size": 2048
          }
        }
      ]
    }
  ]
}
00:09:45.409 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:45.409 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57062
00:09:45.409 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57062 ']'
00:09:45.409 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57062
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57062
00:09:45.410 killing process with pid 57062
10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57062'
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57062
00:09:45.410 10:53:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57062
00:09:45.669 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57084
00:09:45.669 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:45.669 10:53:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57084
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57084 ']'
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57084
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57084
00:09:50.940 killing process with pid 57084
10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57084'
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57084
00:09:50.940 10:53:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57084
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:51.199
00:09:51.199 real 0m6.899s
00:09:51.199 user 0m6.657s
00:09:51.199 sys 0m0.603s
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.199 ************************************
00:09:51.199 END TEST skip_rpc_with_json
************************************
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:51.199 10:53:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:09:51.199 10:53:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:51.199 10:53:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.199 10:53:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:51.199 ************************************
00:09:51.199 START TEST skip_rpc_with_delay
************************************
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:51.199 [2024-12-05 10:53:18.305689] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:51.199 ************************************
00:09:51.199 END TEST skip_rpc_with_delay
************************************
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:51.199
00:09:51.199 real 0m0.082s
00:09:51.199 user 0m0.046s
00:09:51.199 sys 0m0.032s
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.199 10:53:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:09:51.556 10:53:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:09:51.556 10:53:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:09:51.556 10:53:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:09:51.556 10:53:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:51.556 10:53:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.556 10:53:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:51.556 ************************************
00:09:51.556 START TEST exit_on_failed_rpc_init
************************************
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:09:51.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57199
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57199
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57199 ']'
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:51.556 10:53:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:51.556 [2024-12-05 10:53:18.457988] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
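The first exit_on_failed_rpc_init target (pid 57199) is starting above; its remaining startup notices continue below, after which the test launches a second spdk_tgt that must fail because both instances default to the same RPC socket, /var/tmp/spdk.sock. The conflict the entries below capture can be reproduced by hand roughly like this (a sketch, not part of the suite; binary path and core masks taken from the trace):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 2                                                    # let it start listening
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # rpc.c reports the socket in use; exits non-zero
  echo "second instance exit code: $?"
  kill -SIGINT "$first"                                      # clean up the surviving target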
00:09:51.556 [2024-12-05 10:53:18.458213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57199 ]
00:09:51.556 [2024-12-05 10:53:18.608287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:51.556 [2024-12-05 10:53:18.663852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.815 [2024-12-05 10:53:18.720040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:52.399 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:52.657 [2024-12-05 10:53:19.425173] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:09:52.657 [2024-12-05 10:53:19.425245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57211 ]
00:09:52.657 [2024-12-05 10:53:19.575322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.657 [2024-12-05 10:53:19.627933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:52.657 [2024-12-05 10:53:19.628009] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:09:52.657 [2024-12-05 10:53:19.628021] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:09:52.657 [2024-12-05 10:53:19.628029] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57199
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57199 ']'
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57199
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57199
00:09:52.657 killing process with pid 57199
10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57199'
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57199
00:09:52.657 10:53:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57199
00:09:52.916 ************************************
00:09:52.916 END TEST exit_on_failed_rpc_init
************************************
00:09:52.916
00:09:52.916 real 0m1.647s
00:09:52.916 user 0m1.866s
00:09:52.916 sys 0m0.385s
00:09:52.916 10:53:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:52.916 10:53:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:53.174 10:53:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:53.174
00:09:53.174 real 0m14.549s
00:09:53.174 user 0m13.816s
00:09:53.174 sys 0m1.614s
00:09:53.174 ************************************
00:09:53.174 END TEST skip_rpc
************************************
00:09:53.174 10:53:20 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.174 10:53:20 -- common/autotest_common.sh@10 -- # set +x
00:09:53.174 10:53:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:09:53.174 10:53:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:53.174 10:53:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.174 10:53:20 -- common/autotest_common.sh@10 -- # set +x
00:09:53.174
************************************ 00:09:53.174 START TEST rpc_client 00:09:53.174 ************************************ 00:09:53.174 10:53:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:53.174 * Looking for test storage... 00:09:53.174 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:53.174 10:53:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:53.174 10:53:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:53.174 10:53:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:53.432 10:53:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:53.432 10:53:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.433 10:53:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.433 --rc genhtml_branch_coverage=1 00:09:53.433 --rc genhtml_function_coverage=1 00:09:53.433 --rc genhtml_legend=1 00:09:53.433 --rc geninfo_all_blocks=1 00:09:53.433 --rc geninfo_unexecuted_blocks=1 00:09:53.433 00:09:53.433 ' 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.433 --rc genhtml_branch_coverage=1 00:09:53.433 --rc genhtml_function_coverage=1 00:09:53.433 --rc genhtml_legend=1 00:09:53.433 --rc geninfo_all_blocks=1 00:09:53.433 --rc geninfo_unexecuted_blocks=1 00:09:53.433 00:09:53.433 ' 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.433 --rc genhtml_branch_coverage=1 00:09:53.433 --rc genhtml_function_coverage=1 00:09:53.433 --rc genhtml_legend=1 00:09:53.433 --rc geninfo_all_blocks=1 00:09:53.433 --rc geninfo_unexecuted_blocks=1 00:09:53.433 00:09:53.433 ' 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:53.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.433 --rc genhtml_branch_coverage=1 00:09:53.433 --rc genhtml_function_coverage=1 00:09:53.433 --rc genhtml_legend=1 00:09:53.433 --rc geninfo_all_blocks=1 00:09:53.433 --rc geninfo_unexecuted_blocks=1 00:09:53.433 00:09:53.433 ' 00:09:53.433 10:53:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:53.433 OK 00:09:53.433 10:53:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:53.433 00:09:53.433 real 0m0.260s 00:09:53.433 user 0m0.153s 00:09:53.433 sys 0m0.120s 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.433 10:53:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:53.433 ************************************ 00:09:53.433 END TEST rpc_client 00:09:53.433 ************************************ 00:09:53.433 10:53:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:53.433 10:53:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:53.433 10:53:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.433 10:53:20 -- common/autotest_common.sh@10 -- # set +x 00:09:53.433 ************************************ 00:09:53.433 START TEST json_config 00:09:53.433 ************************************ 00:09:53.433 10:53:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:53.433 10:53:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:53.433 10:53:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:53.433 10:53:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.692 10:53:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.692 10:53:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.692 10:53:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.692 10:53:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.692 10:53:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.692 10:53:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:53.692 10:53:20 json_config -- scripts/common.sh@345 -- # : 1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.692 10:53:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.692 10:53:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@353 -- # local d=1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.692 10:53:20 json_config -- scripts/common.sh@355 -- # echo 1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.692 10:53:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@353 -- # local d=2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.692 10:53:20 json_config -- scripts/common.sh@355 -- # echo 2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.692 10:53:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.692 10:53:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.692 10:53:20 json_config -- scripts/common.sh@368 -- # return 0 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:53.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.692 --rc genhtml_branch_coverage=1 00:09:53.692 --rc genhtml_function_coverage=1 00:09:53.692 --rc genhtml_legend=1 00:09:53.692 --rc geninfo_all_blocks=1 00:09:53.692 --rc geninfo_unexecuted_blocks=1 00:09:53.692 00:09:53.692 ' 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:53.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.692 --rc genhtml_branch_coverage=1 00:09:53.692 --rc genhtml_function_coverage=1 00:09:53.692 --rc genhtml_legend=1 00:09:53.692 --rc geninfo_all_blocks=1 00:09:53.692 --rc geninfo_unexecuted_blocks=1 00:09:53.692 00:09:53.692 ' 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:53.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.692 --rc genhtml_branch_coverage=1 00:09:53.692 --rc genhtml_function_coverage=1 00:09:53.692 --rc genhtml_legend=1 00:09:53.692 --rc geninfo_all_blocks=1 00:09:53.692 --rc geninfo_unexecuted_blocks=1 00:09:53.692 00:09:53.692 ' 00:09:53.692 10:53:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:53.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.692 --rc genhtml_branch_coverage=1 00:09:53.692 --rc genhtml_function_coverage=1 00:09:53.692 --rc genhtml_legend=1 00:09:53.692 --rc geninfo_all_blocks=1 00:09:53.692 --rc geninfo_unexecuted_blocks=1 00:09:53.692 00:09:53.692 ' 00:09:53.692 10:53:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.692 10:53:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:53.692 10:53:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.692 10:53:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.692 10:53:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.693 
10:53:20 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.693 10:53:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.693 10:53:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.693 10:53:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.693 10:53:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.693 10:53:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.693 10:53:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.693 10:53:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.693 10:53:20 json_config -- paths/export.sh@5 -- # export PATH 00:09:53.693 10:53:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:53.693 10:53:20 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:53.693 10:53:20 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:53.693 10:53:20 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@50 -- # : 0 00:09:53.693 
10:53:20 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:53.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:53.693 10:53:20 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:53.693 INFO: JSON configuration test init 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.693 10:53:20 json_config -- json_config/json_config.sh@272 -- # 
json_config_test_start_app target --wait-for-rpc 00:09:53.693 10:53:20 json_config -- json_config/common.sh@9 -- # local app=target 00:09:53.693 10:53:20 json_config -- json_config/common.sh@10 -- # shift 00:09:53.693 10:53:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:53.693 10:53:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:53.693 10:53:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:53.693 10:53:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:53.693 10:53:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:53.693 10:53:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57351 00:09:53.693 10:53:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:53.693 Waiting for target to run... 00:09:53.693 10:53:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57351 /var/tmp/spdk_tgt.sock 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 57351 ']' 00:09:53.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:53.693 10:53:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.693 10:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.693 [2024-12-05 10:53:20.808909] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
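The target just launched above carries --wait-for-rpc, so it brings up only its RPC server and holds subsystem initialization until told to proceed; that is what lets the harness feed load_config in before startup completes, as the following entries show. A sketch of driving that handshake by hand, against the socket from the trace (framework_start_init is the stock SPDK RPC that ends the wait state; it is assumed here rather than shown in this excerpt, and some_config.json stands in for any previously saved config):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc load_config < some_config.json   # stage the subsystem config (reads JSON on stdin)
  $rpc framework_start_init             # resume initialization and finish booting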
00:09:53.693 [2024-12-05 10:53:20.809162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57351 ] 00:09:54.259 [2024-12-05 10:53:21.174008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.259 [2024-12-05 10:53:21.216032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.824 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:54.824 10:53:21 json_config -- json_config/common.sh@26 -- # echo '' 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.824 10:53:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:54.824 10:53:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:54.824 10:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:54.824 [2024-12-05 10:53:21.977258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:55.081 10:53:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.081 10:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:55.081 10:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:55.081 10:53:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:55.339 10:53:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:55.339 10:53:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:55.339 10:53:22 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@54 -- # sort 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:55.340 10:53:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.340 10:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:55.340 10:53:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.340 10:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:55.340 10:53:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:55.340 10:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:55.598 MallocForNvmf0 00:09:55.598 10:53:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:55.598 10:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:55.856 MallocForNvmf1 00:09:55.856 10:53:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:55.856 10:53:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:56.114 [2024-12-05 10:53:23.117920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.114 10:53:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.114 10:53:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:56.373 10:53:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:56.373 10:53:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:56.632 10:53:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:56.632 10:53:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:56.632 10:53:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:56.632 10:53:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:56.890 [2024-12-05 10:53:23.953112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:56.890 10:53:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:56.890 10:53:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.890 10:53:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.890 10:53:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:56.890 10:53:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.891 10:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.150 10:53:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:57.150 10:53:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:57.150 10:53:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:57.150 MallocBdevForConfigChangeCheck 00:09:57.150 10:53:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:57.150 10:53:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.150 10:53:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.408 10:53:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:57.408 10:53:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:57.668 INFO: shutting down applications... 00:09:57.668 10:53:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
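The sequence above builds the whole NVMe-oF layout over RPC before save_config snapshots it: two malloc bdevs, a TCP transport, one subsystem holding both namespaces, and a listener on 127.0.0.1:4420. Replayed by hand against a running target it would look roughly like this (same calls and arguments as the trace, same socket):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420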
00:09:57.668 10:53:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:57.668 10:53:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:57.668 10:53:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:57.668 10:53:24 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:57.927 Calling clear_iscsi_subsystem 00:09:57.927 Calling clear_nvmf_subsystem 00:09:57.927 Calling clear_nbd_subsystem 00:09:57.927 Calling clear_ublk_subsystem 00:09:57.927 Calling clear_vhost_blk_subsystem 00:09:57.927 Calling clear_vhost_scsi_subsystem 00:09:57.927 Calling clear_bdev_subsystem 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:57.927 10:53:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:58.494 10:53:25 json_config -- json_config/json_config.sh@352 -- # break 00:09:58.494 10:53:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:58.494 10:53:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:58.494 10:53:25 json_config -- json_config/common.sh@31 -- # local app=target 00:09:58.494 10:53:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:58.494 10:53:25 json_config -- json_config/common.sh@35 -- # [[ -n 57351 ]] 00:09:58.494 10:53:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57351 00:09:58.494 10:53:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:58.494 10:53:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.494 10:53:25 json_config -- json_config/common.sh@41 -- # kill -0 57351 00:09:58.494 10:53:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:58.753 10:53:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:58.753 10:53:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:58.753 10:53:25 json_config -- json_config/common.sh@41 -- # kill -0 57351 00:09:58.753 10:53:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:58.753 10:53:25 json_config -- json_config/common.sh@43 -- # break 00:09:58.753 10:53:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:58.753 SPDK target shutdown done 00:09:58.753 10:53:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:58.753 INFO: relaunching applications... 00:09:58.753 10:53:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
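Shutdown in the trace above is graceful rather than forced: json_config/common.sh sends SIGINT to the target and then polls kill -0 up to thirty times at half-second intervals, only declaring "SPDK target shutdown done" once the process is gone. The same pattern in isolation (a sketch; app_pid is the target's pid, 57351 in the trace):

  kill -SIGINT "$app_pid"                     # ask the reactor to exit cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$app_pid" 2>/dev/null || break # process gone: shutdown complete
      sleep 0.5
  done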
00:09:58.753 10:53:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:58.753 10:53:25 json_config -- json_config/common.sh@9 -- # local app=target 00:09:58.753 10:53:25 json_config -- json_config/common.sh@10 -- # shift 00:09:58.753 10:53:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:58.753 10:53:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:59.011 10:53:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:59.011 10:53:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:59.011 10:53:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:59.011 Waiting for target to run... 00:09:59.011 10:53:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57541 00:09:59.011 10:53:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:59.011 10:53:25 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.011 10:53:25 json_config -- json_config/common.sh@25 -- # waitforlisten 57541 /var/tmp/spdk_tgt.sock 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 57541 ']' 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:59.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.011 10:53:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 [2024-12-05 10:53:25.974392] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:09:59.011 [2024-12-05 10:53:25.974625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57541 ] 00:09:59.270 [2024-12-05 10:53:26.342290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.270 [2024-12-05 10:53:26.384027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.529 [2024-12-05 10:53:26.520294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:59.788 [2024-12-05 10:53:26.733532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.788 [2024-12-05 10:53:26.765583] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:59.788 00:09:59.788 10:53:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.788 10:53:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:59.788 10:53:26 json_config -- json_config/common.sh@26 -- # echo '' 00:09:59.788 10:53:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:59.788 10:53:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
00:09:59.788 INFO: Checking if target configuration is the same... 00:09:59.788 10:53:26 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.788 10:53:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:59.788 10:53:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:59.788 + '[' 2 -ne 2 ']' 00:09:59.788 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:59.788 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:59.788 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:59.788 +++ basename /dev/fd/62 00:09:59.788 ++ mktemp /tmp/62.XXX 00:09:59.788 + tmp_file_1=/tmp/62.nsQ 00:09:59.788 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:59.788 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:59.788 + tmp_file_2=/tmp/spdk_tgt_config.json.dSB 00:09:59.788 + ret=0 00:09:59.788 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.356 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.356 + diff -u /tmp/62.nsQ /tmp/spdk_tgt_config.json.dSB 00:10:00.356 INFO: JSON config files are the same 00:10:00.356 + echo 'INFO: JSON config files are the same' 00:10:00.356 + rm /tmp/62.nsQ /tmp/spdk_tgt_config.json.dSB 00:10:00.356 + exit 0 00:10:00.356 10:53:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:00.356 INFO: changing configuration and checking if this can be detected... 00:10:00.356 10:53:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:00.356 10:53:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:00.356 10:53:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:00.356 10:53:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:00.357 10:53:27 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.357 10:53:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:00.357 + '[' 2 -ne 2 ']' 00:10:00.357 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:00.357 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
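json_diff.sh normalizes both configurations (the live save_config output arriving on /dev/fd/62 and the file on disk) into sorted temp files and compares them with diff -u, so only semantic differences count. The same effect can be approximated with jq -S, noting that jq sorts object keys only, while the repo's config_filter.py -method sort normalizes more aggressively:

    json_same() {
        local a b rc
        a=$(mktemp) && b=$(mktemp) || return 1
        jq -S . "$1" > "$a"
        jq -S . "$2" > "$b"
        diff -u "$a" "$b"; rc=$?
        rm -f "$a" "$b"
        return "$rc"                 # 0 when the configs match, 1 otherwise
    }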
00:10:00.357 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:00.357 +++ basename /dev/fd/62 00:10:00.357 ++ mktemp /tmp/62.XXX 00:10:00.616 + tmp_file_1=/tmp/62.fDy 00:10:00.616 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:00.616 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:00.616 + tmp_file_2=/tmp/spdk_tgt_config.json.cYy 00:10:00.616 + ret=0 00:10:00.616 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.876 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:00.876 + diff -u /tmp/62.fDy /tmp/spdk_tgt_config.json.cYy 00:10:00.876 + ret=1 00:10:00.876 + echo '=== Start of file: /tmp/62.fDy ===' 00:10:00.876 + cat /tmp/62.fDy 00:10:00.876 + echo '=== End of file: /tmp/62.fDy ===' 00:10:00.876 + echo '' 00:10:00.876 + echo '=== Start of file: /tmp/spdk_tgt_config.json.cYy ===' 00:10:00.876 + cat /tmp/spdk_tgt_config.json.cYy 00:10:00.876 + echo '=== End of file: /tmp/spdk_tgt_config.json.cYy ===' 00:10:00.876 + echo '' 00:10:00.876 + rm /tmp/62.fDy /tmp/spdk_tgt_config.json.cYy 00:10:00.876 + exit 1 00:10:00.876 INFO: configuration change detected. 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 57541 ]] 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:00.876 10:53:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.876 10:53:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:00.876 10:53:28 json_config -- json_config/json_config.sh@330 -- # killprocess 57541 00:10:00.876 10:53:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 57541 ']' 00:10:00.876 10:53:28 json_config -- common/autotest_common.sh@958 -- # kill -0 57541 00:10:00.876 10:53:28 json_config -- common/autotest_common.sh@959 -- # uname 00:10:00.876 10:53:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.876 10:53:28 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57541 00:10:01.135 
killing process with pid 57541 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57541' 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@973 -- # kill 57541 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@978 -- # wait 57541 00:10:01.135 10:53:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:01.135 10:53:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.135 10:53:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.394 10:53:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:01.394 10:53:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:01.394 INFO: Success 00:10:01.394 ************************************ 00:10:01.394 END TEST json_config 00:10:01.394 ************************************ 00:10:01.394 00:10:01.394 real 0m7.818s 00:10:01.394 user 0m10.602s 00:10:01.394 sys 0m1.894s 00:10:01.394 10:53:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.394 10:53:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.394 10:53:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:01.394 10:53:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:01.394 10:53:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.394 10:53:28 -- common/autotest_common.sh@10 -- # set +x 00:10:01.394 ************************************ 00:10:01.394 START TEST json_config_extra_key 00:10:01.394 ************************************ 00:10:01.394 10:53:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:01.394 10:53:28 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:01.394 10:53:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:10:01.394 10:53:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:01.654 10:53:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.654 10:53:28 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.654 10:53:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:01.654 10:53:28 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.654 10:53:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:01.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.654 --rc genhtml_branch_coverage=1 00:10:01.654 --rc genhtml_function_coverage=1 00:10:01.654 --rc genhtml_legend=1 00:10:01.654 --rc geninfo_all_blocks=1 00:10:01.654 --rc geninfo_unexecuted_blocks=1 00:10:01.654 00:10:01.654 ' 00:10:01.654 10:53:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:01.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.654 --rc genhtml_branch_coverage=1 00:10:01.654 --rc genhtml_function_coverage=1 00:10:01.654 --rc genhtml_legend=1 00:10:01.654 --rc geninfo_all_blocks=1 00:10:01.654 --rc geninfo_unexecuted_blocks=1 00:10:01.654 00:10:01.654 ' 00:10:01.654 10:53:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.655 --rc genhtml_branch_coverage=1 00:10:01.655 --rc genhtml_function_coverage=1 00:10:01.655 --rc genhtml_legend=1 00:10:01.655 --rc geninfo_all_blocks=1 00:10:01.655 --rc geninfo_unexecuted_blocks=1 00:10:01.655 00:10:01.655 ' 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:01.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.655 --rc genhtml_branch_coverage=1 00:10:01.655 --rc genhtml_function_coverage=1 00:10:01.655 --rc genhtml_legend=1 00:10:01.655 --rc geninfo_all_blocks=1 00:10:01.655 --rc geninfo_unexecuted_blocks=1 00:10:01.655 00:10:01.655 ' 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.655 10:53:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.655 10:53:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.655 10:53:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.655 10:53:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.655 10:53:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.655 10:53:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.655 10:53:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.655 10:53:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:01.655 10:53:28 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:01.655 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:01.655 10:53:28 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:01.655 INFO: launching applications... 
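The "[: : integer expression expected" message captured above is the usual bash pitfall behind nvmf/common.sh line 31: applying the numeric -eq operator to an empty variable ('[' '' -eq 1 ']') is a test-time error, not merely a false result. Two defensive rewrites, with a hypothetical variable name for illustration:

    # [ "$SOME_FLAG" -eq 1 ]                       # errors out when SOME_FLAG is empty
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled    # default to 0 before comparing
    [[ $SOME_FLAG == 1 ]] && echo enabled          # or compare as a string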
00:10:01.655 10:53:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57689 00:10:01.655 Waiting for target to run... 00:10:01.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57689 /var/tmp/spdk_tgt.sock 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57689 ']' 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:01.655 10:53:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.655 10:53:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:01.655 [2024-12-05 10:53:28.697063] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:01.655 [2024-12-05 10:53:28.697259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57689 ] 00:10:01.986 [2024-12-05 10:53:29.068057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.986 [2024-12-05 10:53:29.110608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.245 [2024-12-05 10:53:29.141476] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:02.504 00:10:02.504 INFO: shutting down applications... 00:10:02.504 10:53:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.504 10:53:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:02.504 10:53:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
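The declare -A traces above show how json_config/common.sh tracks each app: parallel associative arrays keyed by app name hold the pid, RPC socket, CPU/memory parameters, and config path, which is what lets the same start/stop helpers serve 'target' here and other app names in related tests. A condensed sketch, with $SPDK_TGT_BIN standing in for build/bin/spdk_tgt and a hypothetical config path:

    declare -A app_pid app_socket app_params configs_path
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]='-m 0x1 -s 1024'
    configs_path[target]=$HOME/extra_key.json      # hypothetical config path

    start_app() {
        local app=$1
        "$SPDK_TGT_BIN" ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }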
00:10:02.504 10:53:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57689 ]] 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57689 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57689 00:10:02.504 10:53:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57689 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:03.072 10:53:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:03.072 SPDK target shutdown done 00:10:03.072 Success 00:10:03.072 10:53:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:03.072 ************************************ 00:10:03.072 END TEST json_config_extra_key 00:10:03.072 ************************************ 00:10:03.072 00:10:03.072 real 0m1.689s 00:10:03.072 user 0m1.400s 00:10:03.072 sys 0m0.456s 00:10:03.072 10:53:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.072 10:53:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:03.072 10:53:30 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:03.072 10:53:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.072 10:53:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.072 10:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:03.072 ************************************ 00:10:03.072 START TEST alias_rpc 00:10:03.072 ************************************ 00:10:03.072 10:53:30 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:03.331 * Looking for test storage... 
00:10:03.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.331 10:53:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.331 --rc genhtml_branch_coverage=1 00:10:03.331 --rc genhtml_function_coverage=1 00:10:03.331 --rc genhtml_legend=1 00:10:03.331 --rc geninfo_all_blocks=1 00:10:03.331 --rc geninfo_unexecuted_blocks=1 00:10:03.331 00:10:03.331 ' 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.331 --rc genhtml_branch_coverage=1 00:10:03.331 --rc genhtml_function_coverage=1 00:10:03.331 --rc genhtml_legend=1 00:10:03.331 --rc geninfo_all_blocks=1 00:10:03.331 --rc geninfo_unexecuted_blocks=1 00:10:03.331 00:10:03.331 ' 00:10:03.331 10:53:30 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.331 --rc genhtml_branch_coverage=1 00:10:03.331 --rc genhtml_function_coverage=1 00:10:03.331 --rc genhtml_legend=1 00:10:03.331 --rc geninfo_all_blocks=1 00:10:03.331 --rc geninfo_unexecuted_blocks=1 00:10:03.331 00:10:03.331 ' 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:03.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.331 --rc genhtml_branch_coverage=1 00:10:03.331 --rc genhtml_function_coverage=1 00:10:03.331 --rc genhtml_legend=1 00:10:03.331 --rc geninfo_all_blocks=1 00:10:03.331 --rc geninfo_unexecuted_blocks=1 00:10:03.331 00:10:03.331 ' 00:10:03.331 10:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:03.331 10:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:03.331 10:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57762 00:10:03.331 10:53:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57762 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57762 ']' 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.331 10:53:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.331 [2024-12-05 10:53:30.440379] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
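The scripts/common.sh trace above (cmp_versions, repeated before every lcov invocation in this run) implements dotted-version comparison in pure bash: split both strings on separators, treat missing fields as zero, and compare field by field numerically. Condensed to its core, assuming plain decimal fields:

    version_lt() {
        local IFS=.                     # common.sh also splits on '-' and ':'
        local -a a=($1) b=($2)
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}  # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                        # equal is not less-than
    }
    # version_lt 1.15 2 succeeds, matching the 'lt 1.15 2' checks above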
00:10:03.331 [2024-12-05 10:53:30.440471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57762 ] 00:10:03.589 [2024-12-05 10:53:30.592399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.589 [2024-12-05 10:53:30.647613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.589 [2024-12-05 10:53:30.707756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:04.157 10:53:31 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.157 10:53:31 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:04.157 10:53:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:04.417 10:53:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57762 00:10:04.417 10:53:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57762 ']' 00:10:04.417 10:53:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57762 00:10:04.417 10:53:31 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:04.417 10:53:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.417 10:53:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57762 00:10:04.676 killing process with pid 57762 00:10:04.676 10:53:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.676 10:53:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.676 10:53:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57762' 00:10:04.676 10:53:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 57762 00:10:04.676 10:53:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 57762 00:10:04.934 ************************************ 00:10:04.934 END TEST alias_rpc 00:10:04.934 ************************************ 00:10:04.934 00:10:04.934 real 0m1.780s 00:10:04.934 user 0m1.886s 00:10:04.934 sys 0m0.468s 00:10:04.934 10:53:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.934 10:53:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.934 10:53:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:04.934 10:53:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:04.934 10:53:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.934 10:53:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.934 10:53:31 -- common/autotest_common.sh@10 -- # set +x 00:10:04.934 ************************************ 00:10:04.934 START TEST spdkcli_tcp 00:10:04.934 ************************************ 00:10:04.934 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:05.192 * Looking for test storage... 
00:10:05.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.192 10:53:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.192 --rc genhtml_branch_coverage=1 00:10:05.192 --rc genhtml_function_coverage=1 00:10:05.192 --rc genhtml_legend=1 00:10:05.192 --rc geninfo_all_blocks=1 00:10:05.192 --rc geninfo_unexecuted_blocks=1 00:10:05.192 00:10:05.192 ' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.192 --rc genhtml_branch_coverage=1 00:10:05.192 --rc genhtml_function_coverage=1 00:10:05.192 --rc genhtml_legend=1 00:10:05.192 --rc geninfo_all_blocks=1 00:10:05.192 --rc geninfo_unexecuted_blocks=1 00:10:05.192 
00:10:05.192 ' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.192 --rc genhtml_branch_coverage=1 00:10:05.192 --rc genhtml_function_coverage=1 00:10:05.192 --rc genhtml_legend=1 00:10:05.192 --rc geninfo_all_blocks=1 00:10:05.192 --rc geninfo_unexecuted_blocks=1 00:10:05.192 00:10:05.192 ' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.192 --rc genhtml_branch_coverage=1 00:10:05.192 --rc genhtml_function_coverage=1 00:10:05.192 --rc genhtml_legend=1 00:10:05.192 --rc geninfo_all_blocks=1 00:10:05.192 --rc geninfo_unexecuted_blocks=1 00:10:05.192 00:10:05.192 ' 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57846 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:05.192 10:53:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57846 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57846 ']' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.192 10:53:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.192 [2024-12-05 10:53:32.299890] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
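The spdkcli_tcp test that follows exercises RPC over TCP even though spdk_tgt only listens on a UNIX-domain socket: it runs socat as a bridge on 127.0.0.1:9998 and points rpc.py at that TCP endpoint. The essential pattern, as traced below:

    # forward a TCP connection on port 9998 to the target's UNIX socket;
    # add ,reuseaddr,fork to TCP-LISTEN to serve multiple connections
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid" 2>/dev/null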
00:10:05.192 [2024-12-05 10:53:32.299967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57846 ] 00:10:05.451 [2024-12-05 10:53:32.453388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.451 [2024-12-05 10:53:32.507022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.451 [2024-12-05 10:53:32.507024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.451 [2024-12-05 10:53:32.566373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.387 10:53:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.387 10:53:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:06.387 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57863 00:10:06.387 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:06.387 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:06.387 [ 00:10:06.387 "bdev_malloc_delete", 00:10:06.387 "bdev_malloc_create", 00:10:06.387 "bdev_null_resize", 00:10:06.387 "bdev_null_delete", 00:10:06.387 "bdev_null_create", 00:10:06.387 "bdev_nvme_cuse_unregister", 00:10:06.387 "bdev_nvme_cuse_register", 00:10:06.387 "bdev_opal_new_user", 00:10:06.387 "bdev_opal_set_lock_state", 00:10:06.387 "bdev_opal_delete", 00:10:06.387 "bdev_opal_get_info", 00:10:06.387 "bdev_opal_create", 00:10:06.387 "bdev_nvme_opal_revert", 00:10:06.387 "bdev_nvme_opal_init", 00:10:06.387 "bdev_nvme_send_cmd", 00:10:06.387 "bdev_nvme_set_keys", 00:10:06.387 "bdev_nvme_get_path_iostat", 00:10:06.387 "bdev_nvme_get_mdns_discovery_info", 00:10:06.387 "bdev_nvme_stop_mdns_discovery", 00:10:06.387 "bdev_nvme_start_mdns_discovery", 00:10:06.387 "bdev_nvme_set_multipath_policy", 00:10:06.387 "bdev_nvme_set_preferred_path", 00:10:06.387 "bdev_nvme_get_io_paths", 00:10:06.387 "bdev_nvme_remove_error_injection", 00:10:06.387 "bdev_nvme_add_error_injection", 00:10:06.387 "bdev_nvme_get_discovery_info", 00:10:06.387 "bdev_nvme_stop_discovery", 00:10:06.387 "bdev_nvme_start_discovery", 00:10:06.387 "bdev_nvme_get_controller_health_info", 00:10:06.387 "bdev_nvme_disable_controller", 00:10:06.387 "bdev_nvme_enable_controller", 00:10:06.387 "bdev_nvme_reset_controller", 00:10:06.387 "bdev_nvme_get_transport_statistics", 00:10:06.387 "bdev_nvme_apply_firmware", 00:10:06.387 "bdev_nvme_detach_controller", 00:10:06.387 "bdev_nvme_get_controllers", 00:10:06.387 "bdev_nvme_attach_controller", 00:10:06.387 "bdev_nvme_set_hotplug", 00:10:06.387 "bdev_nvme_set_options", 00:10:06.387 "bdev_passthru_delete", 00:10:06.387 "bdev_passthru_create", 00:10:06.388 "bdev_lvol_set_parent_bdev", 00:10:06.388 "bdev_lvol_set_parent", 00:10:06.388 "bdev_lvol_check_shallow_copy", 00:10:06.388 "bdev_lvol_start_shallow_copy", 00:10:06.388 "bdev_lvol_grow_lvstore", 00:10:06.388 "bdev_lvol_get_lvols", 00:10:06.388 "bdev_lvol_get_lvstores", 00:10:06.388 "bdev_lvol_delete", 00:10:06.388 "bdev_lvol_set_read_only", 00:10:06.388 "bdev_lvol_resize", 00:10:06.388 "bdev_lvol_decouple_parent", 00:10:06.388 "bdev_lvol_inflate", 00:10:06.388 "bdev_lvol_rename", 00:10:06.388 "bdev_lvol_clone_bdev", 00:10:06.388 "bdev_lvol_clone", 00:10:06.388 "bdev_lvol_snapshot", 
00:10:06.388 "bdev_lvol_create", 00:10:06.388 "bdev_lvol_delete_lvstore", 00:10:06.388 "bdev_lvol_rename_lvstore", 00:10:06.388 "bdev_lvol_create_lvstore", 00:10:06.388 "bdev_raid_set_options", 00:10:06.388 "bdev_raid_remove_base_bdev", 00:10:06.388 "bdev_raid_add_base_bdev", 00:10:06.388 "bdev_raid_delete", 00:10:06.388 "bdev_raid_create", 00:10:06.388 "bdev_raid_get_bdevs", 00:10:06.388 "bdev_error_inject_error", 00:10:06.388 "bdev_error_delete", 00:10:06.388 "bdev_error_create", 00:10:06.388 "bdev_split_delete", 00:10:06.388 "bdev_split_create", 00:10:06.388 "bdev_delay_delete", 00:10:06.388 "bdev_delay_create", 00:10:06.388 "bdev_delay_update_latency", 00:10:06.388 "bdev_zone_block_delete", 00:10:06.388 "bdev_zone_block_create", 00:10:06.388 "blobfs_create", 00:10:06.388 "blobfs_detect", 00:10:06.388 "blobfs_set_cache_size", 00:10:06.388 "bdev_aio_delete", 00:10:06.388 "bdev_aio_rescan", 00:10:06.388 "bdev_aio_create", 00:10:06.388 "bdev_ftl_set_property", 00:10:06.388 "bdev_ftl_get_properties", 00:10:06.388 "bdev_ftl_get_stats", 00:10:06.388 "bdev_ftl_unmap", 00:10:06.388 "bdev_ftl_unload", 00:10:06.388 "bdev_ftl_delete", 00:10:06.388 "bdev_ftl_load", 00:10:06.388 "bdev_ftl_create", 00:10:06.388 "bdev_virtio_attach_controller", 00:10:06.388 "bdev_virtio_scsi_get_devices", 00:10:06.388 "bdev_virtio_detach_controller", 00:10:06.388 "bdev_virtio_blk_set_hotplug", 00:10:06.388 "bdev_iscsi_delete", 00:10:06.388 "bdev_iscsi_create", 00:10:06.388 "bdev_iscsi_set_options", 00:10:06.388 "bdev_uring_delete", 00:10:06.388 "bdev_uring_rescan", 00:10:06.388 "bdev_uring_create", 00:10:06.388 "accel_error_inject_error", 00:10:06.388 "ioat_scan_accel_module", 00:10:06.388 "dsa_scan_accel_module", 00:10:06.388 "iaa_scan_accel_module", 00:10:06.388 "keyring_file_remove_key", 00:10:06.388 "keyring_file_add_key", 00:10:06.388 "keyring_linux_set_options", 00:10:06.388 "fsdev_aio_delete", 00:10:06.388 "fsdev_aio_create", 00:10:06.388 "iscsi_get_histogram", 00:10:06.388 "iscsi_enable_histogram", 00:10:06.388 "iscsi_set_options", 00:10:06.388 "iscsi_get_auth_groups", 00:10:06.388 "iscsi_auth_group_remove_secret", 00:10:06.388 "iscsi_auth_group_add_secret", 00:10:06.388 "iscsi_delete_auth_group", 00:10:06.388 "iscsi_create_auth_group", 00:10:06.388 "iscsi_set_discovery_auth", 00:10:06.388 "iscsi_get_options", 00:10:06.388 "iscsi_target_node_request_logout", 00:10:06.388 "iscsi_target_node_set_redirect", 00:10:06.388 "iscsi_target_node_set_auth", 00:10:06.388 "iscsi_target_node_add_lun", 00:10:06.388 "iscsi_get_stats", 00:10:06.388 "iscsi_get_connections", 00:10:06.388 "iscsi_portal_group_set_auth", 00:10:06.388 "iscsi_start_portal_group", 00:10:06.388 "iscsi_delete_portal_group", 00:10:06.388 "iscsi_create_portal_group", 00:10:06.388 "iscsi_get_portal_groups", 00:10:06.388 "iscsi_delete_target_node", 00:10:06.388 "iscsi_target_node_remove_pg_ig_maps", 00:10:06.388 "iscsi_target_node_add_pg_ig_maps", 00:10:06.388 "iscsi_create_target_node", 00:10:06.388 "iscsi_get_target_nodes", 00:10:06.388 "iscsi_delete_initiator_group", 00:10:06.388 "iscsi_initiator_group_remove_initiators", 00:10:06.388 "iscsi_initiator_group_add_initiators", 00:10:06.388 "iscsi_create_initiator_group", 00:10:06.388 "iscsi_get_initiator_groups", 00:10:06.388 "nvmf_set_crdt", 00:10:06.388 "nvmf_set_config", 00:10:06.388 "nvmf_set_max_subsystems", 00:10:06.388 "nvmf_stop_mdns_prr", 00:10:06.388 "nvmf_publish_mdns_prr", 00:10:06.388 "nvmf_subsystem_get_listeners", 00:10:06.388 "nvmf_subsystem_get_qpairs", 00:10:06.388 
"nvmf_subsystem_get_controllers", 00:10:06.388 "nvmf_get_stats", 00:10:06.388 "nvmf_get_transports", 00:10:06.388 "nvmf_create_transport", 00:10:06.388 "nvmf_get_targets", 00:10:06.388 "nvmf_delete_target", 00:10:06.388 "nvmf_create_target", 00:10:06.388 "nvmf_subsystem_allow_any_host", 00:10:06.388 "nvmf_subsystem_set_keys", 00:10:06.388 "nvmf_subsystem_remove_host", 00:10:06.388 "nvmf_subsystem_add_host", 00:10:06.388 "nvmf_ns_remove_host", 00:10:06.388 "nvmf_ns_add_host", 00:10:06.388 "nvmf_subsystem_remove_ns", 00:10:06.388 "nvmf_subsystem_set_ns_ana_group", 00:10:06.388 "nvmf_subsystem_add_ns", 00:10:06.388 "nvmf_subsystem_listener_set_ana_state", 00:10:06.388 "nvmf_discovery_get_referrals", 00:10:06.388 "nvmf_discovery_remove_referral", 00:10:06.388 "nvmf_discovery_add_referral", 00:10:06.388 "nvmf_subsystem_remove_listener", 00:10:06.388 "nvmf_subsystem_add_listener", 00:10:06.388 "nvmf_delete_subsystem", 00:10:06.388 "nvmf_create_subsystem", 00:10:06.388 "nvmf_get_subsystems", 00:10:06.388 "env_dpdk_get_mem_stats", 00:10:06.388 "nbd_get_disks", 00:10:06.388 "nbd_stop_disk", 00:10:06.388 "nbd_start_disk", 00:10:06.388 "ublk_recover_disk", 00:10:06.388 "ublk_get_disks", 00:10:06.388 "ublk_stop_disk", 00:10:06.388 "ublk_start_disk", 00:10:06.388 "ublk_destroy_target", 00:10:06.388 "ublk_create_target", 00:10:06.388 "virtio_blk_create_transport", 00:10:06.388 "virtio_blk_get_transports", 00:10:06.388 "vhost_controller_set_coalescing", 00:10:06.388 "vhost_get_controllers", 00:10:06.388 "vhost_delete_controller", 00:10:06.388 "vhost_create_blk_controller", 00:10:06.388 "vhost_scsi_controller_remove_target", 00:10:06.388 "vhost_scsi_controller_add_target", 00:10:06.388 "vhost_start_scsi_controller", 00:10:06.388 "vhost_create_scsi_controller", 00:10:06.388 "thread_set_cpumask", 00:10:06.388 "scheduler_set_options", 00:10:06.388 "framework_get_governor", 00:10:06.388 "framework_get_scheduler", 00:10:06.388 "framework_set_scheduler", 00:10:06.388 "framework_get_reactors", 00:10:06.388 "thread_get_io_channels", 00:10:06.388 "thread_get_pollers", 00:10:06.388 "thread_get_stats", 00:10:06.388 "framework_monitor_context_switch", 00:10:06.388 "spdk_kill_instance", 00:10:06.388 "log_enable_timestamps", 00:10:06.388 "log_get_flags", 00:10:06.388 "log_clear_flag", 00:10:06.388 "log_set_flag", 00:10:06.388 "log_get_level", 00:10:06.388 "log_set_level", 00:10:06.388 "log_get_print_level", 00:10:06.388 "log_set_print_level", 00:10:06.388 "framework_enable_cpumask_locks", 00:10:06.388 "framework_disable_cpumask_locks", 00:10:06.388 "framework_wait_init", 00:10:06.388 "framework_start_init", 00:10:06.388 "scsi_get_devices", 00:10:06.388 "bdev_get_histogram", 00:10:06.388 "bdev_enable_histogram", 00:10:06.388 "bdev_set_qos_limit", 00:10:06.388 "bdev_set_qd_sampling_period", 00:10:06.388 "bdev_get_bdevs", 00:10:06.388 "bdev_reset_iostat", 00:10:06.388 "bdev_get_iostat", 00:10:06.388 "bdev_examine", 00:10:06.388 "bdev_wait_for_examine", 00:10:06.388 "bdev_set_options", 00:10:06.388 "accel_get_stats", 00:10:06.388 "accel_set_options", 00:10:06.388 "accel_set_driver", 00:10:06.388 "accel_crypto_key_destroy", 00:10:06.388 "accel_crypto_keys_get", 00:10:06.388 "accel_crypto_key_create", 00:10:06.388 "accel_assign_opc", 00:10:06.388 "accel_get_module_info", 00:10:06.388 "accel_get_opc_assignments", 00:10:06.388 "vmd_rescan", 00:10:06.388 "vmd_remove_device", 00:10:06.388 "vmd_enable", 00:10:06.388 "sock_get_default_impl", 00:10:06.388 "sock_set_default_impl", 00:10:06.388 "sock_impl_set_options", 00:10:06.388 
"sock_impl_get_options", 00:10:06.388 "iobuf_get_stats", 00:10:06.388 "iobuf_set_options", 00:10:06.388 "keyring_get_keys", 00:10:06.388 "framework_get_pci_devices", 00:10:06.388 "framework_get_config", 00:10:06.388 "framework_get_subsystems", 00:10:06.388 "fsdev_set_opts", 00:10:06.388 "fsdev_get_opts", 00:10:06.388 "trace_get_info", 00:10:06.388 "trace_get_tpoint_group_mask", 00:10:06.388 "trace_disable_tpoint_group", 00:10:06.388 "trace_enable_tpoint_group", 00:10:06.388 "trace_clear_tpoint_mask", 00:10:06.388 "trace_set_tpoint_mask", 00:10:06.388 "notify_get_notifications", 00:10:06.388 "notify_get_types", 00:10:06.388 "spdk_get_version", 00:10:06.388 "rpc_get_methods" 00:10:06.388 ] 00:10:06.388 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.388 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:06.388 10:53:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57846 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57846 ']' 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57846 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57846 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57846' 00:10:06.388 killing process with pid 57846 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57846 00:10:06.388 10:53:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57846 00:10:07.011 00:10:07.011 real 0m1.824s 00:10:07.011 user 0m3.209s 00:10:07.011 sys 0m0.514s 00:10:07.011 10:53:33 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.011 ************************************ 00:10:07.011 END TEST spdkcli_tcp 00:10:07.011 ************************************ 00:10:07.011 10:53:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.011 10:53:33 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:07.011 10:53:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.011 10:53:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.011 10:53:33 -- common/autotest_common.sh@10 -- # set +x 00:10:07.011 ************************************ 00:10:07.011 START TEST dpdk_mem_utility 00:10:07.011 ************************************ 00:10:07.011 10:53:33 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:07.011 * Looking for test storage... 
00:10:07.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.011 10:53:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.011 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.012 --rc genhtml_branch_coverage=1 00:10:07.012 --rc genhtml_function_coverage=1 00:10:07.012 --rc genhtml_legend=1 00:10:07.012 --rc geninfo_all_blocks=1 00:10:07.012 --rc geninfo_unexecuted_blocks=1 00:10:07.012 00:10:07.012 ' 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.012 --rc 
genhtml_branch_coverage=1 00:10:07.012 --rc genhtml_function_coverage=1 00:10:07.012 --rc genhtml_legend=1 00:10:07.012 --rc geninfo_all_blocks=1 00:10:07.012 --rc geninfo_unexecuted_blocks=1 00:10:07.012 00:10:07.012 ' 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.012 --rc genhtml_branch_coverage=1 00:10:07.012 --rc genhtml_function_coverage=1 00:10:07.012 --rc genhtml_legend=1 00:10:07.012 --rc geninfo_all_blocks=1 00:10:07.012 --rc geninfo_unexecuted_blocks=1 00:10:07.012 00:10:07.012 ' 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:07.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.012 --rc genhtml_branch_coverage=1 00:10:07.012 --rc genhtml_function_coverage=1 00:10:07.012 --rc genhtml_legend=1 00:10:07.012 --rc geninfo_all_blocks=1 00:10:07.012 --rc geninfo_unexecuted_blocks=1 00:10:07.012 00:10:07.012 ' 00:10:07.012 10:53:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:07.012 10:53:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57945 00:10:07.012 10:53:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:07.012 10:53:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57945 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57945 ']' 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.012 10:53:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:07.270 [2024-12-05 10:53:34.203718] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
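The trace above shows the launch pattern used by each of these tests: spdk_tgt is started in the background, its pid recorded (57945 here), and waitforlisten blocks until the JSON-RPC socket answers. A rough stand-in for that wait, written as a plain polling loop rather than the autotest helper itself (spdk_get_version is one of the methods listed earlier):

    # Start the target and poll until /var/tmp/spdk.sock accepts RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdkpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdkpid) is listening"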
00:10:07.270 [2024-12-05 10:53:34.203956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57945 ] 00:10:07.270 [2024-12-05 10:53:34.355654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.270 [2024-12-05 10:53:34.402646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.529 [2024-12-05 10:53:34.457792] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.107 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.107 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:08.107 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:08.107 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:08.107 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.107 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:08.107 { 00:10:08.107 "filename": "/tmp/spdk_mem_dump.txt" 00:10:08.107 } 00:10:08.107 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.107 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:08.107 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:08.107 1 heaps totaling size 818.000000 MiB 00:10:08.107 size: 818.000000 MiB heap id: 0 00:10:08.107 end heaps---------- 00:10:08.107 9 mempools totaling size 603.782043 MiB 00:10:08.107 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:08.107 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:08.107 size: 100.555481 MiB name: bdev_io_57945 00:10:08.107 size: 50.003479 MiB name: msgpool_57945 00:10:08.107 size: 36.509338 MiB name: fsdev_io_57945 00:10:08.107 size: 21.763794 MiB name: PDU_Pool 00:10:08.107 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:08.107 size: 4.133484 MiB name: evtpool_57945 00:10:08.107 size: 0.026123 MiB name: Session_Pool 00:10:08.107 end mempools------- 00:10:08.107 6 memzones totaling size 4.142822 MiB 00:10:08.107 size: 1.000366 MiB name: RG_ring_0_57945 00:10:08.107 size: 1.000366 MiB name: RG_ring_1_57945 00:10:08.107 size: 1.000366 MiB name: RG_ring_4_57945 00:10:08.107 size: 1.000366 MiB name: RG_ring_5_57945 00:10:08.107 size: 0.125366 MiB name: RG_ring_2_57945 00:10:08.107 size: 0.015991 MiB name: RG_ring_3_57945 00:10:08.107 end memzones------- 00:10:08.108 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:08.108 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:10:08.108 list of free elements. 
size: 10.803223 MiB 00:10:08.108 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:08.108 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:08.108 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:08.108 element at address: 0x200000400000 with size: 0.993958 MiB 00:10:08.108 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:08.108 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:08.108 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:08.108 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:08.108 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:10:08.108 element at address: 0x20000a600000 with size: 0.488892 MiB 00:10:08.108 element at address: 0x200000c00000 with size: 0.486267 MiB 00:10:08.108 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:08.108 element at address: 0x200003e00000 with size: 0.480286 MiB 00:10:08.108 element at address: 0x200028200000 with size: 0.396301 MiB 00:10:08.108 element at address: 0x200000800000 with size: 0.351746 MiB 00:10:08.108 list of standard malloc elements. size: 199.267883 MiB 00:10:08.108 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:08.108 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:08.108 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:08.108 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:08.108 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:08.108 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:08.108 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:08.108 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:08.108 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:08.108 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:10:08.108 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000085e580 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087e840 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087e900 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f080 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f140 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f200 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f380 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f440 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f500 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:10:08.108 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:10:08.108 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:10:08.108 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:10:08.109 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:08.109 element at address: 0x200028265740 with size: 0.000183 MiB 00:10:08.109 element at address: 0x200028265800 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c400 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c600 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c780 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c840 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c900 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d080 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d140 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d200 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d380 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d440 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d500 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d680 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d740 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d800 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826d980 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826da40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826db00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826de00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826df80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e040 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e100 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e280 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e340 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e400 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e580 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e700 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e880 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826e940 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f000 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f180 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f240 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f300 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f480 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f540 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f600 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f780 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f840 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f900 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:08.109 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:08.109 list of memzone associated elements. 
size: 607.928894 MiB 00:10:08.109 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:08.109 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:08.109 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:08.109 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:08.109 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:08.109 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57945_0 00:10:08.109 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:08.109 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57945_0 00:10:08.109 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:08.109 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57945_0 00:10:08.109 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:08.109 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:08.109 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:08.109 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:08.109 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:08.109 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57945_0 00:10:08.109 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:08.109 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57945 00:10:08.109 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:08.109 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57945 00:10:08.109 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:08.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:08.109 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:08.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:08.109 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:08.109 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:08.109 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:08.109 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:08.109 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:08.109 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57945 00:10:08.109 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:08.109 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57945 00:10:08.109 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:08.109 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57945 00:10:08.109 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:10:08.109 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57945 00:10:08.109 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:08.109 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57945 00:10:08.109 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:08.109 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57945 00:10:08.109 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:08.109 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:08.109 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:08.109 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:08.109 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:08.109 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:10:08.109 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:08.109 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57945 00:10:08.109 element at address: 0x20000085e640 with size: 0.125488 MiB 00:10:08.109 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57945 00:10:08.109 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:08.109 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:08.109 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:10:08.109 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:08.109 element at address: 0x20000085a380 with size: 0.016113 MiB 00:10:08.109 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57945 00:10:08.109 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:10:08.109 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:08.109 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:10:08.109 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57945 00:10:08.109 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:08.109 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57945 00:10:08.109 element at address: 0x20000085a180 with size: 0.000305 MiB 00:10:08.109 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57945 00:10:08.109 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:10:08.109 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:08.109 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:08.109 10:53:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57945 00:10:08.109 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57945 ']' 00:10:08.109 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57945 00:10:08.109 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:08.109 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.109 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57945 00:10:08.375 killing process with pid 57945 00:10:08.375 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.375 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.375 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57945' 00:10:08.375 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57945 00:10:08.375 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57945 00:10:08.671 00:10:08.671 real 0m1.718s 00:10:08.671 user 0m1.792s 00:10:08.671 sys 0m0.453s 00:10:08.671 ************************************ 00:10:08.671 END TEST dpdk_mem_utility 00:10:08.671 ************************************ 00:10:08.671 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.671 10:53:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:08.671 10:53:35 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:08.671 10:53:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.671 10:53:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.671 10:53:35 -- common/autotest_common.sh@10 -- # set +x 
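The dpdk_mem_utility test that just finished follows a three-step flow visible in the trace: the env_dpdk_get_mem_stats RPC makes the target dump its DPDK memory state (to /tmp/spdk_mem_dump.txt here), then scripts/dpdk_mem_info.py is run once for the heap/mempool/memzone summary and once with -m 0, which produced the long per-element heap-0 listing above. Replayed by hand, with the paths and flags taken from this log:

    # 1) Ask the running target to dump its DPDK memory state.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # 2) Summarize heaps, mempools and memzones from the dump.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # 3) Detailed per-element listing, as seen in the dump above.
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0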
00:10:08.671 ************************************ 00:10:08.671 START TEST event 00:10:08.671 ************************************ 00:10:08.671 10:53:35 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:08.671 * Looking for test storage... 00:10:08.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:08.671 10:53:35 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:08.671 10:53:35 event -- common/autotest_common.sh@1711 -- # lcov --version 00:10:08.671 10:53:35 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:08.928 10:53:35 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.928 10:53:35 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.928 10:53:35 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.928 10:53:35 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.928 10:53:35 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.928 10:53:35 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.928 10:53:35 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.928 10:53:35 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.928 10:53:35 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.928 10:53:35 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.928 10:53:35 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.928 10:53:35 event -- scripts/common.sh@344 -- # case "$op" in 00:10:08.928 10:53:35 event -- scripts/common.sh@345 -- # : 1 00:10:08.928 10:53:35 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.928 10:53:35 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.928 10:53:35 event -- scripts/common.sh@365 -- # decimal 1 00:10:08.928 10:53:35 event -- scripts/common.sh@353 -- # local d=1 00:10:08.928 10:53:35 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.928 10:53:35 event -- scripts/common.sh@355 -- # echo 1 00:10:08.928 10:53:35 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.928 10:53:35 event -- scripts/common.sh@366 -- # decimal 2 00:10:08.928 10:53:35 event -- scripts/common.sh@353 -- # local d=2 00:10:08.928 10:53:35 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.928 10:53:35 event -- scripts/common.sh@355 -- # echo 2 00:10:08.928 10:53:35 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.928 10:53:35 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.928 10:53:35 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.928 10:53:35 event -- scripts/common.sh@368 -- # return 0 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.928 --rc genhtml_branch_coverage=1 00:10:08.928 --rc genhtml_function_coverage=1 00:10:08.928 --rc genhtml_legend=1 00:10:08.928 --rc geninfo_all_blocks=1 00:10:08.928 --rc geninfo_unexecuted_blocks=1 00:10:08.928 00:10:08.928 ' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.928 --rc genhtml_branch_coverage=1 00:10:08.928 --rc genhtml_function_coverage=1 00:10:08.928 --rc genhtml_legend=1 00:10:08.928 --rc 
geninfo_all_blocks=1 00:10:08.928 --rc geninfo_unexecuted_blocks=1 00:10:08.928 00:10:08.928 ' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.928 --rc genhtml_branch_coverage=1 00:10:08.928 --rc genhtml_function_coverage=1 00:10:08.928 --rc genhtml_legend=1 00:10:08.928 --rc geninfo_all_blocks=1 00:10:08.928 --rc geninfo_unexecuted_blocks=1 00:10:08.928 00:10:08.928 ' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.928 --rc genhtml_branch_coverage=1 00:10:08.928 --rc genhtml_function_coverage=1 00:10:08.928 --rc genhtml_legend=1 00:10:08.928 --rc geninfo_all_blocks=1 00:10:08.928 --rc geninfo_unexecuted_blocks=1 00:10:08.928 00:10:08.928 ' 00:10:08.928 10:53:35 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:08.928 10:53:35 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:08.928 10:53:35 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:08.928 10:53:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.928 10:53:35 event -- common/autotest_common.sh@10 -- # set +x 00:10:08.928 ************************************ 00:10:08.928 START TEST event_perf 00:10:08.928 ************************************ 00:10:08.928 10:53:35 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:08.928 Running I/O for 1 seconds...[2024-12-05 10:53:35.952980] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:08.928 [2024-12-05 10:53:35.953183] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58029 ] 00:10:09.186 [2024-12-05 10:53:36.113682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.186 [2024-12-05 10:53:36.166551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.186 [2024-12-05 10:53:36.166596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.186 Running I/O for 1 seconds...[2024-12-05 10:53:36.166780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.186 [2024-12-05 10:53:36.166781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.120 00:10:10.120 lcore 0: 201265 00:10:10.120 lcore 1: 201265 00:10:10.120 lcore 2: 201266 00:10:10.120 lcore 3: 201265 00:10:10.120 done. 
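The event_perf run above used -m 0xF -t 1, i.e. four reactors for one second, and each lcore reported roughly 201k processed events. Aggregate throughput is just the sum of the four counters printed before "done.":

    # Total events/sec across mask 0xF, using the counts from this run.
    echo $((201265 + 201265 + 201266 + 201265))   # 805061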
00:10:10.120 00:10:10.120 real 0m1.295s 00:10:10.120 user 0m4.110s 00:10:10.120 sys 0m0.058s 00:10:10.120 10:53:37 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.120 10:53:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:10.120 ************************************ 00:10:10.120 END TEST event_perf 00:10:10.120 ************************************ 00:10:10.120 10:53:37 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:10.120 10:53:37 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:10.120 10:53:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.120 10:53:37 event -- common/autotest_common.sh@10 -- # set +x 00:10:10.378 ************************************ 00:10:10.378 START TEST event_reactor 00:10:10.378 ************************************ 00:10:10.378 10:53:37 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:10.378 [2024-12-05 10:53:37.315252] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:10.378 [2024-12-05 10:53:37.315379] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58063 ] 00:10:10.378 [2024-12-05 10:53:37.472990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.378 [2024-12-05 10:53:37.524022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.757 test_start 00:10:11.757 oneshot 00:10:11.757 tick 100 00:10:11.757 tick 100 00:10:11.757 tick 250 00:10:11.757 tick 100 00:10:11.757 tick 100 00:10:11.757 tick 100 00:10:11.757 tick 250 00:10:11.757 tick 500 00:10:11.757 tick 100 00:10:11.757 tick 100 00:10:11.757 tick 250 00:10:11.757 tick 100 00:10:11.757 tick 100 00:10:11.757 test_end 00:10:11.757 ************************************ 00:10:11.757 END TEST event_reactor 00:10:11.757 ************************************ 00:10:11.757 00:10:11.757 real 0m1.280s 00:10:11.757 user 0m1.127s 00:10:11.757 sys 0m0.047s 00:10:11.757 10:53:38 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.757 10:53:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 10:53:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:11.757 10:53:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.757 10:53:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.757 10:53:38 event -- common/autotest_common.sh@10 -- # set +x 00:10:11.757 ************************************ 00:10:11.757 START TEST event_reactor_perf 00:10:11.757 ************************************ 00:10:11.757 10:53:38 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:11.757 [2024-12-05 10:53:38.665962] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
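The test_start/test_end block above comes from the standalone reactor test, run for one second with -t 1. The oneshot line is a single scheduled event, and the tick 100, tick 250 and tick 500 lines appear to be periodic timers of increasing period firing at the expected relative rates. The binary can be rerun directly with the same flag seen in this log:

    # Re-run the reactor event test for one second on the default core.
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1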
00:10:11.757 [2024-12-05 10:53:38.666213] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:10:11.757 [2024-12-05 10:53:38.811653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.757 [2024-12-05 10:53:38.863143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.137 test_start 00:10:13.137 test_end 00:10:13.137 Performance: 481565 events per second 00:10:13.137 00:10:13.137 real 0m1.265s 00:10:13.137 user 0m1.124s 00:10:13.137 sys 0m0.036s 00:10:13.137 10:53:39 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.137 ************************************ 00:10:13.137 END TEST event_reactor_perf 00:10:13.137 ************************************ 00:10:13.137 10:53:39 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.137 10:53:39 event -- event/event.sh@49 -- # uname -s 00:10:13.137 10:53:39 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:13.137 10:53:39 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:13.137 10:53:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.137 10:53:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.137 10:53:39 event -- common/autotest_common.sh@10 -- # set +x 00:10:13.137 ************************************ 00:10:13.137 START TEST event_scheduler 00:10:13.137 ************************************ 00:10:13.137 10:53:39 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:13.137 * Looking for test storage... 
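Before the scheduler test spins up below, note the reactor_perf figure above: a single reactor sustained 481565 events per second over its one-second run, against roughly 805k/sec for four reactors in event_perf earlier. The standalone binary takes the same -t flag:

    # Single-core reactor throughput, as measured above.
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1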
00:10:13.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.137 10:53:40 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.137 --rc genhtml_branch_coverage=1 00:10:13.137 --rc genhtml_function_coverage=1 00:10:13.137 --rc genhtml_legend=1 00:10:13.137 --rc geninfo_all_blocks=1 00:10:13.137 --rc geninfo_unexecuted_blocks=1 00:10:13.137 00:10:13.137 ' 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.137 --rc genhtml_branch_coverage=1 00:10:13.137 --rc genhtml_function_coverage=1 00:10:13.137 --rc genhtml_legend=1 00:10:13.137 --rc geninfo_all_blocks=1 00:10:13.137 --rc geninfo_unexecuted_blocks=1 00:10:13.137 00:10:13.137 ' 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.137 --rc genhtml_branch_coverage=1 00:10:13.137 --rc genhtml_function_coverage=1 00:10:13.137 --rc genhtml_legend=1 00:10:13.137 --rc geninfo_all_blocks=1 00:10:13.137 --rc geninfo_unexecuted_blocks=1 00:10:13.137 00:10:13.137 ' 00:10:13.137 10:53:40 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.137 --rc genhtml_branch_coverage=1 00:10:13.137 --rc genhtml_function_coverage=1 00:10:13.137 --rc genhtml_legend=1 00:10:13.137 --rc geninfo_all_blocks=1 00:10:13.137 --rc geninfo_unexecuted_blocks=1 00:10:13.138 00:10:13.138 ' 00:10:13.138 10:53:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:13.138 10:53:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58168 00:10:13.138 10:53:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:13.138 10:53:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:13.138 10:53:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58168 00:10:13.138 10:53:40 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58168 ']' 00:10:13.138 10:53:40 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.138 10:53:40 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.138 10:53:40 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.138 10:53:40 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.138 10:53:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:13.138 [2024-12-05 10:53:40.260595] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:13.138 [2024-12-05 10:53:40.260680] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58168 ] 00:10:13.395 [2024-12-05 10:53:40.414070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.395 [2024-12-05 10:53:40.485700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.395 [2024-12-05 10:53:40.485804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.395 [2024-12-05 10:53:40.486035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.395 [2024-12-05 10:53:40.486036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:14.331 10:53:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:14.331 POWER: Cannot set governor of lcore 0 to userspace 00:10:14.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:14.331 POWER: Cannot set governor of lcore 0 to performance 00:10:14.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:14.331 POWER: Cannot set governor of lcore 0 to userspace 00:10:14.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:14.331 POWER: Cannot set governor of lcore 0 to userspace 00:10:14.331 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:14.331 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:14.331 POWER: Unable to set Power Management Environment for lcore 0 00:10:14.331 [2024-12-05 10:53:41.187918] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:10:14.331 [2024-12-05 10:53:41.187931] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:10:14.331 [2024-12-05 10:53:41.187942] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:14.331 [2024-12-05 10:53:41.187955] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:14.331 [2024-12-05 10:53:41.187962] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:14.331 [2024-12-05 10:53:41.187969] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 [2024-12-05 10:53:41.241178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.331 [2024-12-05 10:53:41.273564] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 ************************************ 00:10:14.331 START TEST scheduler_create_thread 00:10:14.331 ************************************ 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 2 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 3 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 4 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 5 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 6 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 7 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.331 8 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.331 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.332 9 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.332 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.899 10 00:10:14.899 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.899 10:53:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:14.899 10:53:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.899 10:53:41 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:16.278 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.278 10:53:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:16.278 10:53:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:16.278 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.278 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:16.845 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.845 10:53:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:16.845 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.846 10:53:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.781 10:53:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.781 10:53:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:17.781 10:53:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:17.781 10:53:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.781 10:53:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.350 ************************************ 00:10:18.350 END TEST scheduler_create_thread 00:10:18.350 ************************************ 00:10:18.350 10:53:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.350 00:10:18.350 real 0m4.210s 00:10:18.350 user 0m0.023s 00:10:18.350 sys 0m0.011s 00:10:18.350 10:53:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.350 10:53:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.607 10:53:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:18.607 10:53:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58168 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58168 ']' 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58168 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58168 00:10:18.607 killing process with pid 58168 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58168' 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58168 00:10:18.607 10:53:45 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58168 00:10:18.865 [2024-12-05 10:53:45.779036] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:18.865 00:10:18.865 real 0m6.034s 00:10:18.865 user 0m13.202s 00:10:18.865 sys 0m0.472s 00:10:18.865 ************************************ 00:10:18.865 END TEST event_scheduler 00:10:18.865 ************************************ 00:10:18.865 10:53:46 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.865 10:53:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:19.124 10:53:46 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:19.124 10:53:46 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:19.124 10:53:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.124 10:53:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.124 10:53:46 event -- common/autotest_common.sh@10 -- # set +x 00:10:19.124 ************************************ 00:10:19.124 START TEST app_repeat 00:10:19.124 ************************************ 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58273 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58273' 00:10:19.124 Process app_repeat pid: 58273 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:19.124 spdk_app_start Round 0 00:10:19.124 10:53:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58273 /var/tmp/spdk-nbd.sock 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58273 ']' 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.124 10:53:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:19.124 [2024-12-05 10:53:46.132168] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:19.124 [2024-12-05 10:53:46.132236] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58273 ] 00:10:19.124 [2024-12-05 10:53:46.282955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.383 [2024-12-05 10:53:46.333746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.383 [2024-12-05 10:53:46.333749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.383 [2024-12-05 10:53:46.375936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:19.950 10:53:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.950 10:53:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:19.950 10:53:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:20.208 Malloc0 00:10:20.208 10:53:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:20.467 Malloc1 00:10:20.467 10:53:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:20.467 10:53:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:20.727 /dev/nbd0 00:10:20.727 10:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:20.727 10:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:20.727 1+0 records in 00:10:20.727 1+0 records out 00:10:20.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356199 s, 11.5 MB/s 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.727 10:53:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:20.727 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:20.727 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:20.727 10:53:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:20.986 /dev/nbd1 00:10:20.986 10:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:20.986 10:53:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:20.986 1+0 records in 00:10:20.986 1+0 records out 00:10:20.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311372 s, 13.2 MB/s 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.986 10:53:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:20.986 10:53:47 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:10:20.986 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:20.986 10:53:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:20.986 10:53:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:20.986 10:53:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.986 10:53:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:21.246 { 00:10:21.246 "nbd_device": "/dev/nbd0", 00:10:21.246 "bdev_name": "Malloc0" 00:10:21.246 }, 00:10:21.246 { 00:10:21.246 "nbd_device": "/dev/nbd1", 00:10:21.246 "bdev_name": "Malloc1" 00:10:21.246 } 00:10:21.246 ]' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:21.246 { 00:10:21.246 "nbd_device": "/dev/nbd0", 00:10:21.246 "bdev_name": "Malloc0" 00:10:21.246 }, 00:10:21.246 { 00:10:21.246 "nbd_device": "/dev/nbd1", 00:10:21.246 "bdev_name": "Malloc1" 00:10:21.246 } 00:10:21.246 ]' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:21.246 /dev/nbd1' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:21.246 /dev/nbd1' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:21.246 256+0 records in 00:10:21.246 256+0 records out 00:10:21.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011792 s, 88.9 MB/s 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:21.246 256+0 records in 00:10:21.246 256+0 records out 00:10:21.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253609 s, 41.3 MB/s 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:21.246 256+0 records in 00:10:21.246 
256+0 records out 00:10:21.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282628 s, 37.1 MB/s 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.246 10:53:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:21.505 10:53:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.763 10:53:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:22.021 10:53:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:22.021 10:53:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:22.280 10:53:49 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:22.539 [2024-12-05 10:53:49.480111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:22.539 [2024-12-05 10:53:49.526111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.539 [2024-12-05 10:53:49.526114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.539 [2024-12-05 10:53:49.569240] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.539 [2024-12-05 10:53:49.569323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:22.539 [2024-12-05 10:53:49.569334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:25.822 10:53:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:25.822 spdk_app_start Round 1 00:10:25.822 10:53:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:25.822 10:53:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58273 /var/tmp/spdk-nbd.sock 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58273 ']' 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.822 10:53:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:25.822 10:53:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:25.822 Malloc0 00:10:25.822 10:53:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:26.081 Malloc1 00:10:26.081 10:53:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:26.081 10:53:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:26.082 /dev/nbd0 00:10:26.082 10:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:26.341 10:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:26.341 1+0 records in 00:10:26.341 1+0 records out 
00:10:26.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214808 s, 19.1 MB/s 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:26.341 10:53:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:26.341 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.341 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:26.341 10:53:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:26.341 /dev/nbd1 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:26.601 1+0 records in 00:10:26.601 1+0 records out 00:10:26.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355449 s, 11.5 MB/s 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:26.601 10:53:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:26.601 { 00:10:26.601 "nbd_device": "/dev/nbd0", 00:10:26.601 "bdev_name": "Malloc0" 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "nbd_device": "/dev/nbd1", 00:10:26.601 "bdev_name": "Malloc1" 00:10:26.601 } 
00:10:26.601 ]' 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:26.601 10:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:26.601 { 00:10:26.601 "nbd_device": "/dev/nbd0", 00:10:26.601 "bdev_name": "Malloc0" 00:10:26.601 }, 00:10:26.601 { 00:10:26.601 "nbd_device": "/dev/nbd1", 00:10:26.601 "bdev_name": "Malloc1" 00:10:26.601 } 00:10:26.601 ]' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:26.861 /dev/nbd1' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:26.861 /dev/nbd1' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:26.861 256+0 records in 00:10:26.861 256+0 records out 00:10:26.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011531 s, 90.9 MB/s 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:26.861 256+0 records in 00:10:26.861 256+0 records out 00:10:26.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253561 s, 41.4 MB/s 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:26.861 256+0 records in 00:10:26.861 256+0 records out 00:10:26.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265815 s, 39.4 MB/s 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.861 10:53:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.120 10:53:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.379 10:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:27.641 10:53:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:27.641 10:53:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:27.902 10:53:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:27.902 [2024-12-05 10:53:55.041320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:28.159 [2024-12-05 10:53:55.084197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.159 [2024-12-05 10:53:55.084198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.159 [2024-12-05 10:53:55.127075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.160 [2024-12-05 10:53:55.127151] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:28.160 [2024-12-05 10:53:55.127162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:31.448 10:53:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:31.448 spdk_app_start Round 2 00:10:31.448 10:53:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:31.448 10:53:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58273 /var/tmp/spdk-nbd.sock 00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58273 ']' 00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.448 10:53:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:31.448 10:53:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.448 10:53:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:31.448 10:53:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:31.448 Malloc0 00:10:31.448 10:53:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:31.448 Malloc1 00:10:31.448 10:53:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:31.448 10:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:31.708 /dev/nbd0 00:10:31.708 10:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:31.708 10:53:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:31.708 1+0 records in 00:10:31.708 1+0 records out 
00:10:31.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301657 s, 13.6 MB/s 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.708 10:53:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:31.708 10:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.708 10:53:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:31.708 10:53:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:31.966 /dev/nbd1 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:31.966 1+0 records in 00:10:31.966 1+0 records out 00:10:31.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040773 s, 10.0 MB/s 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:31.966 10:53:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:31.966 10:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:32.226 { 00:10:32.226 "nbd_device": "/dev/nbd0", 00:10:32.226 "bdev_name": "Malloc0" 00:10:32.226 }, 00:10:32.226 { 00:10:32.226 "nbd_device": "/dev/nbd1", 00:10:32.226 "bdev_name": "Malloc1" 00:10:32.226 } 
00:10:32.226 ]' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:32.226 { 00:10:32.226 "nbd_device": "/dev/nbd0", 00:10:32.226 "bdev_name": "Malloc0" 00:10:32.226 }, 00:10:32.226 { 00:10:32.226 "nbd_device": "/dev/nbd1", 00:10:32.226 "bdev_name": "Malloc1" 00:10:32.226 } 00:10:32.226 ]' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:32.226 /dev/nbd1' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:32.226 /dev/nbd1' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:32.226 256+0 records in 00:10:32.226 256+0 records out 00:10:32.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509345 s, 206 MB/s 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:32.226 10:53:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:32.226 256+0 records in 00:10:32.226 256+0 records out 00:10:32.226 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205932 s, 50.9 MB/s 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:32.486 256+0 records in 00:10:32.486 256+0 records out 00:10:32.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188081 s, 55.8 MB/s 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:32.486 10:53:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:32.486 10:53:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.487 10:53:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:32.746 10:53:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.006 10:53:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.006 10:54:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:33.006 10:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:33.006 10:54:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:33.265 10:54:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:33.265 10:54:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:33.531 10:54:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:33.531 [2024-12-05 10:54:00.565832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.531 [2024-12-05 10:54:00.618098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.531 [2024-12-05 10:54:00.618100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.531 [2024-12-05 10:54:00.660136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:33.531 [2024-12-05 10:54:00.660210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:33.531 [2024-12-05 10:54:00.660221] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:36.816 10:54:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58273 /var/tmp/spdk-nbd.sock 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58273 ']' 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
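The write/verify pass traced above (nbd_common.sh@70-85) reduces to a simple round trip: seed a random pattern file, copy it onto every exported NBD device, then compare each device's contents back against the file. A condensed sketch of that flow, paraphrased from the xtrace rather than copied from the helper; the file name and device list are the ones visible in the log:

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # seed 256 x 4096 B = 1 MiB of random data
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    # write phase: copy the pattern onto each device with O_DIRECT
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB of each device must match the pattern file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"    # any mismatch exits non-zero and fails the test
    done
    rm "$tmp_file"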
00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:36.816 10:54:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58273 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58273 ']' 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58273 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58273 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.816 killing process with pid 58273 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58273' 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58273 00:10:36.816 10:54:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58273 00:10:36.816 spdk_app_start is called in Round 0. 00:10:36.816 Shutdown signal received, stop current app iteration 00:10:36.816 Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 reinitialization... 00:10:36.817 spdk_app_start is called in Round 1. 00:10:36.817 Shutdown signal received, stop current app iteration 00:10:36.817 Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 reinitialization... 00:10:36.817 spdk_app_start is called in Round 2. 00:10:36.817 Shutdown signal received, stop current app iteration 00:10:36.817 Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 reinitialization... 00:10:36.817 spdk_app_start is called in Round 3. 00:10:36.817 Shutdown signal received, stop current app iteration 00:10:36.817 10:54:03 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:36.817 10:54:03 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:36.817 00:10:36.817 real 0m17.761s 00:10:36.817 user 0m39.041s 00:10:36.817 sys 0m3.140s 00:10:36.817 10:54:03 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.817 10:54:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:36.817 ************************************ 00:10:36.817 END TEST app_repeat 00:10:36.817 ************************************ 00:10:36.817 10:54:03 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:36.817 10:54:03 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:36.817 10:54:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.817 10:54:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.817 10:54:03 event -- common/autotest_common.sh@10 -- # set +x 00:10:36.817 ************************************ 00:10:36.817 START TEST cpu_locks 00:10:36.817 ************************************ 00:10:36.817 10:54:03 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:37.075 * Looking for test storage... 
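killprocess, used above to tear down the app_repeat target, does more than send a signal: it first proves the PID is alive and looks up its command name so it never signals the wrong process. A simplified sketch of the checks visible in the trace (kill -0, uname, ps); the real autotest_common.sh helper additionally escalates for sudo-wrapped processes, which is skipped here:

    killprocess() {
        local pid=$1 process_name=
        kill -0 "$pid" || return 1                       # bail out if the PID is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1          # sudo wrappers need special handling, omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # wait only succeeds for child processes
    }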
00:10:37.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:37.075 10:54:04 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.075 10:54:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.076 10:54:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.076 --rc genhtml_branch_coverage=1 00:10:37.076 --rc genhtml_function_coverage=1 00:10:37.076 --rc genhtml_legend=1 00:10:37.076 --rc geninfo_all_blocks=1 00:10:37.076 --rc geninfo_unexecuted_blocks=1 00:10:37.076 00:10:37.076 ' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.076 --rc genhtml_branch_coverage=1 00:10:37.076 --rc genhtml_function_coverage=1 
00:10:37.076 --rc genhtml_legend=1 00:10:37.076 --rc geninfo_all_blocks=1 00:10:37.076 --rc geninfo_unexecuted_blocks=1 00:10:37.076 00:10:37.076 ' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.076 --rc genhtml_branch_coverage=1 00:10:37.076 --rc genhtml_function_coverage=1 00:10:37.076 --rc genhtml_legend=1 00:10:37.076 --rc geninfo_all_blocks=1 00:10:37.076 --rc geninfo_unexecuted_blocks=1 00:10:37.076 00:10:37.076 ' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.076 --rc genhtml_branch_coverage=1 00:10:37.076 --rc genhtml_function_coverage=1 00:10:37.076 --rc genhtml_legend=1 00:10:37.076 --rc geninfo_all_blocks=1 00:10:37.076 --rc geninfo_unexecuted_blocks=1 00:10:37.076 00:10:37.076 ' 00:10:37.076 10:54:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:37.076 10:54:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:37.076 10:54:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:37.076 10:54:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.076 10:54:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:37.076 ************************************ 00:10:37.076 START TEST default_locks 00:10:37.076 ************************************ 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58703 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58703 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58703 ']' 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.076 10:54:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:37.076 [2024-12-05 10:54:04.215705] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
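The scripts/common.sh trace above ("lt 1.15 2" through cmp_versions) is a coverage gate: the extra lcov branch/function flags are only exported when the installed lcov predates version 2. The comparison itself is per-field numeric ordering over dot/dash separated components; a minimal standalone stand-in under that assumption (version_lt is a made-up name, not the verbatim cmp_versions):

    # return 0 when $1 sorts strictly before $2, field by field
    version_lt() {
        local IFS='.-'
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov, enable legacy flags"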
00:10:37.076 [2024-12-05 10:54:04.215780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58703 ] 00:10:37.335 [2024-12-05 10:54:04.366346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.335 [2024-12-05 10:54:04.417331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.335 [2024-12-05 10:54:04.475459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.282 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.282 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:38.282 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58703 00:10:38.282 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58703 00:10:38.282 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58703 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58703 ']' 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58703 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58703 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.543 killing process with pid 58703 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58703' 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58703 00:10:38.543 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58703 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58703 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58703 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58703 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58703 ']' 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.802 
10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:38.802 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58703) - No such process 00:10:38.802 ERROR: process (pid: 58703) is no longer running 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:38.802 00:10:38.802 real 0m1.741s 00:10:38.802 user 0m1.858s 00:10:38.802 sys 0m0.540s 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.802 10:54:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:38.803 ************************************ 00:10:38.803 END TEST default_locks 00:10:38.803 ************************************ 00:10:38.803 10:54:05 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:38.803 10:54:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:38.803 10:54:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.803 10:54:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.061 ************************************ 00:10:39.061 START TEST default_locks_via_rpc 00:10:39.061 ************************************ 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58755 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58755 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58755 ']' 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:10:39.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.061 10:54:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.061 [2024-12-05 10:54:06.028718] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:39.061 [2024-12-05 10:54:06.028796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58755 ] 00:10:39.061 [2024-12-05 10:54:06.163416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.061 [2024-12-05 10:54:06.215746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.319 [2024-12-05 10:54:06.273559] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58755 00:10:39.884 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58755 00:10:39.885 10:54:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58755 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58755 ']' 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58755 00:10:40.451 10:54:07 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58755 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.451 killing process with pid 58755 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58755' 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58755 00:10:40.451 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58755 00:10:40.710 00:10:40.710 real 0m1.824s 00:10:40.710 user 0m1.950s 00:10:40.710 sys 0m0.571s 00:10:40.710 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.710 10:54:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.710 ************************************ 00:10:40.710 END TEST default_locks_via_rpc 00:10:40.710 ************************************ 00:10:40.710 10:54:07 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:40.710 10:54:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.710 10:54:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.710 10:54:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.710 ************************************ 00:10:40.710 START TEST non_locking_app_on_locked_coremask 00:10:40.710 ************************************ 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58800 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58800 /var/tmp/spdk.sock 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58800 ']' 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
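The default_locks_via_rpc test that just wrapped up above demonstrates the runtime alternative to the --disable-cpumask-locks flag: the per-core lock files are released and re-acquired over the RPC socket while the target keeps running. The two calls from the trace, as they would be issued by hand against the default socket:

    # drop the per-core lock files (e.g. /var/tmp/spdk_cpu_lock_000) at runtime
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
    # ...another instance may now claim the same cores...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks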
00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.710 10:54:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:40.967 [2024-12-05 10:54:07.925599] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:40.967 [2024-12-05 10:54:07.925668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58800 ] 00:10:40.967 [2024-12-05 10:54:08.078412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.967 [2024-12-05 10:54:08.125518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.225 [2024-12-05 10:54:08.181133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58816 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58816 /var/tmp/spdk2.sock 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58816 ']' 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.791 10:54:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:41.791 [2024-12-05 10:54:08.842971] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:41.791 [2024-12-05 10:54:08.843191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58816 ] 00:10:42.050 [2024-12-05 10:54:08.996047] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
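non_locking_app_on_locked_coremask needs two targets alive on the same core at once, which is only possible because the second one opts out of lock claiming and listens on its own RPC socket. The pair of launches recorded above, in sketch form:

    # first instance claims core 0 and the default RPC socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    # second instance: same core mask, but no lock claim and a private socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &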
00:10:42.050 [2024-12-05 10:54:08.996086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.050 [2024-12-05 10:54:09.085152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.050 [2024-12-05 10:54:09.192923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:42.616 10:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.616 10:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:42.616 10:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58800 00:10:42.616 10:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58800 00:10:42.616 10:54:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58800 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58800 ']' 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58800 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58800 00:10:43.638 killing process with pid 58800 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58800' 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58800 00:10:43.638 10:54:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58800 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58816 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58816 ']' 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58816 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58816 00:10:44.207 killing process with pid 58816 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.207 10:54:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58816' 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58816 00:10:44.207 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58816 00:10:44.776 00:10:44.776 real 0m3.798s 00:10:44.776 user 0m4.163s 00:10:44.776 sys 0m1.099s 00:10:44.776 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.776 ************************************ 00:10:44.776 END TEST non_locking_app_on_locked_coremask 00:10:44.776 ************************************ 00:10:44.776 10:54:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:44.776 10:54:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:44.776 10:54:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:44.776 10:54:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.776 10:54:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:44.777 ************************************ 00:10:44.777 START TEST locking_app_on_unlocked_coremask 00:10:44.777 ************************************ 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58883 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58883 /var/tmp/spdk.sock 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58883 ']' 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.777 10:54:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:44.777 [2024-12-05 10:54:11.790717] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:44.777 [2024-12-05 10:54:11.790794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58883 ] 00:10:44.777 [2024-12-05 10:54:11.924871] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
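Each positive case in these tests is confirmed with locks_exist (cpu_locks.sh@22 in the trace), which asserts that the target really holds the POSIX lock for its core rather than merely running. The check is a one-liner over lslocks output, shown here as it was run just above against pid 58800:

    # succeed only if $1 (a PID) holds a file lock on a spdk_cpu_lock_* path
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 58800    # a non-zero exit here fails the test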
00:10:44.777 [2024-12-05 10:54:11.924919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.036 [2024-12-05 10:54:11.972702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.036 [2024-12-05 10:54:12.028048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58894 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58894 /var/tmp/spdk2.sock 00:10:45.605 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58894 ']' 00:10:45.606 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:45.606 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:45.606 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:45.606 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.606 10:54:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.606 [2024-12-05 10:54:12.760496] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:10:45.606 [2024-12-05 10:54:12.760684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:10:45.865 [2024-12-05 10:54:12.911858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.865 [2024-12-05 10:54:13.009163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.125 [2024-12-05 10:54:13.125886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:46.693 10:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.693 10:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:46.693 10:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58894 00:10:46.693 10:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58894 00:10:46.693 10:54:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58883 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58883 ']' 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58883 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58883 00:10:47.631 killing process with pid 58883 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58883' 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58883 00:10:47.631 10:54:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58883 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58894 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58894 ']' 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58894 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58894 00:10:48.218 killing process with pid 58894 00:10:48.218 10:54:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58894' 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58894 00:10:48.218 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58894 00:10:48.476 00:10:48.476 real 0m3.894s 00:10:48.476 user 0m4.379s 00:10:48.476 sys 0m1.053s 00:10:48.476 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.476 10:54:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.476 ************************************ 00:10:48.476 END TEST locking_app_on_unlocked_coremask 00:10:48.476 ************************************ 00:10:48.735 10:54:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:48.735 10:54:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.735 10:54:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.735 10:54:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 ************************************ 00:10:48.735 START TEST locking_app_on_locked_coremask 00:10:48.735 ************************************ 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58961 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58961 /var/tmp/spdk.sock 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58961 ']' 00:10:48.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.735 10:54:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.735 [2024-12-05 10:54:15.755180] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
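Every target start in this suite is gated on waitforlisten, which blocks until the new process answers RPCs on its UNIX socket. A simplified polling sketch of that idea; the real autotest_common.sh helper also bounds the retries (max_retries=100 in the trace) and reports more detail on failure:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1    # the target died during startup
            # a successful RPC means the socket is up and the app is serving
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1
    }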
00:10:48.735 [2024-12-05 10:54:15.755457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58961 ] 00:10:48.995 [2024-12-05 10:54:15.904836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.995 [2024-12-05 10:54:15.956184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.995 [2024-12-05 10:54:16.012630] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58977 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58977 /var/tmp/spdk2.sock 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58977 /var/tmp/spdk2.sock 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 58977 /var/tmp/spdk2.sock 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58977 ']' 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:49.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.564 10:54:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:49.564 [2024-12-05 10:54:16.707329] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
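The second target launched here is expected to die, since pid 58961 already owns the core 0 lock, so the test wraps the call in the NOT helper and passes only when waitforlisten fails, as the error trace just below confirms. A simplified sketch of that inversion pattern:

    # NOT cmd...: succeed exactly when cmd fails
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 58977 /var/tmp/spdk2.sock    # passes: startup aborts on the held lock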
00:10:49.564 [2024-12-05 10:54:16.707551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:10:49.823 [2024-12-05 10:54:16.855687] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58961 has claimed it. 00:10:49.823 [2024-12-05 10:54:16.855747] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:50.390 ERROR: process (pid: 58977) is no longer running 00:10:50.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58977) - No such process 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58961 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58961 00:10:50.390 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58961 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58961 ']' 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58961 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58961 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58961' 00:10:50.959 killing process with pid 58961 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58961 00:10:50.959 10:54:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58961 00:10:51.219 ************************************ 00:10:51.219 END TEST locking_app_on_locked_coremask 00:10:51.219 ************************************ 00:10:51.219 00:10:51.219 real 0m2.489s 00:10:51.219 user 0m2.833s 00:10:51.219 sys 0m0.624s 00:10:51.219 10:54:18 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.219 10:54:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:51.219 10:54:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:51.219 10:54:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.219 10:54:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.219 10:54:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:51.219 ************************************ 00:10:51.219 START TEST locking_overlapped_coremask 00:10:51.219 ************************************ 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59028 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59028 /var/tmp/spdk.sock 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59028 ']' 00:10:51.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.219 10:54:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:51.219 [2024-12-05 10:54:18.323812] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:10:51.219 [2024-12-05 10:54:18.323895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59028 ] 00:10:51.479 [2024-12-05 10:54:18.474568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.479 [2024-12-05 10:54:18.525151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.479 [2024-12-05 10:54:18.525266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.479 [2024-12-05 10:54:18.525267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.479 [2024-12-05 10:54:18.580717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59040 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59040 /var/tmp/spdk2.sock 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59040 /var/tmp/spdk2.sock 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59040 /var/tmp/spdk2.sock 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59040 ']' 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:52.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.416 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:52.416 [2024-12-05 10:54:19.251516] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
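The conflict this test provokes is easiest to see in the masks themselves: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so both instances claim core 2, which is exactly the core named in the error that follows. Checking the intersection by hand:

    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))    # prints 0x4, i.e. core 2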
00:10:52.416 [2024-12-05 10:54:19.251584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:10:52.416 [2024-12-05 10:54:19.401146] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59028 has claimed it. 00:10:52.416 [2024-12-05 10:54:19.401200] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:52.986 ERROR: process (pid: 59040) is no longer running 00:10:52.986 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59040) - No such process 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59028 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59028 ']' 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59028 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59028 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59028' 00:10:52.986 killing process with pid 59028 00:10:52.986 10:54:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59028 00:10:52.986 10:54:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59028 00:10:53.245 ************************************ 00:10:53.245 END TEST locking_overlapped_coremask 00:10:53.245 ************************************ 00:10:53.245 00:10:53.245 real 0m2.039s 00:10:53.245 user 0m5.699s 00:10:53.245 sys 0m0.391s 00:10:53.245 10:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.245 10:54:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.245 10:54:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:53.245 10:54:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:53.246 10:54:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.246 10:54:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:53.246 ************************************ 00:10:53.246 START TEST locking_overlapped_coremask_via_rpc 00:10:53.246 ************************************ 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59086 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59086 /var/tmp/spdk.sock 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59086 ']' 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.246 10:54:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.506 [2024-12-05 10:54:20.436482] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:53.506 [2024-12-05 10:54:20.436716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59086 ] 00:10:53.506 [2024-12-05 10:54:20.587211] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
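The "CPU core locks deactivated" notice above is the hinge between the two cases: in the first test, spdk_tgt -m 0x7 claimed lock files /var/tmp/spdk_cpu_lock_000..002 at startup, so the overlapping -m 0x1c instance died on the shared core 2 and check_remaining_locks asserted that exactly locks 000-002 survived; this variant starts both targets with --disable-cpumask-locks and defers the claim to an RPC. A hedged reproduction of the startup-time conflict, with the binary path as shown in the log and a sleep standing in for waitforlisten:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
sleep 2                               # crude wait for the reactors to come up
ls /var/tmp/spdk_cpu_lock_*           # expect _000 _001 _002

# overlaps on core 2, so startup must fail with "Cannot create lock on core 2"
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock; then
    echo 'unexpected: core lock was not enforced'
else
    echo 'expected failure: core 2 already claimed'
fi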
00:10:53.506 [2024-12-05 10:54:20.587249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.506 [2024-12-05 10:54:20.632427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.506 [2024-12-05 10:54:20.632613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.506 [2024-12-05 10:54:20.632615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.779 [2024-12-05 10:54:20.686838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.360 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.360 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:54.360 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59103 00:10:54.360 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59103 /var/tmp/spdk2.sock 00:10:54.360 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59103 ']' 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.361 10:54:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.361 [2024-12-05 10:54:21.366034] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:54.361 [2024-12-05 10:54:21.366329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59103 ] 00:10:54.361 [2024-12-05 10:54:21.516810] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:54.361 [2024-12-05 10:54:21.516855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:54.619 [2024-12-05 10:54:21.612851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.619 [2024-12-05 10:54:21.616434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.619 [2024-12-05 10:54:21.616439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.619 [2024-12-05 10:54:21.722853] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.186 [2024-12-05 10:54:22.282372] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59086 has claimed it. 
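The NOT prefix around the failing rpc_cmd call is the harness's inverted assertion: run the command, capture its exit status in es, and succeed only when es is non-zero (the real helper in autotest_common.sh additionally treats es > 128, a signal death, as a distinct case, which is the (( es > 128 )) branch in the trace). A simplified sketch of that contract, reusing the harness's rpc_cmd wrapper as seen above:

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))     # NOT succeeds exactly when the wrapped command failed
}

# mirrors the assertion here: the second target must be unable to claim core 2
NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks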
00:10:55.186 request: 00:10:55.186 { 00:10:55.186 "method": "framework_enable_cpumask_locks", 00:10:55.186 "req_id": 1 00:10:55.186 } 00:10:55.186 Got JSON-RPC error response 00:10:55.186 response: 00:10:55.186 { 00:10:55.186 "code": -32603, 00:10:55.186 "message": "Failed to claim CPU core: 2" 00:10:55.186 } 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59086 /var/tmp/spdk.sock 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59086 ']' 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.186 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59103 /var/tmp/spdk2.sock 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59103 ']' 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:55.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
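The request/response dump above is rpc.py's pretty-printed view of a plain JSON-RPC 2.0 exchange over the UNIX socket; -32603 is the spec's generic "Internal error" code, here carrying the lock-claim message. Assuming a netcat build with UNIX-socket support (-U), and with the exact wire framing being an assumption rather than something the log shows, roughly the same exchange can be driven by hand:

printf '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks"}' \
    | nc -U /var/tmp/spdk2.sock
# expected error object, matching the dump above:
#   "error": {"code": -32603, "message": "Failed to claim CPU core: 2"}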
00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.444 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:55.703 ************************************ 00:10:55.703 END TEST locking_overlapped_coremask_via_rpc 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:55.703 00:10:55.703 real 0m2.365s 00:10:55.703 user 0m1.086s 00:10:55.703 sys 0m0.200s 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.703 10:54:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.703 ************************************ 00:10:55.703 10:54:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:55.703 10:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59086 ]] 00:10:55.703 10:54:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59086 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59086 ']' 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59086 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59086 00:10:55.703 killing process with pid 59086 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59086' 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59086 00:10:55.703 10:54:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59086 00:10:56.271 10:54:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59103 ]] 00:10:56.271 10:54:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59103 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59103 ']' 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59103 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.271 
10:54:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59103 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59103' 00:10:56.271 killing process with pid 59103 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59103 00:10:56.271 10:54:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59103 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59086 ]] 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59086 00:10:56.530 Process with pid 59086 is not found 00:10:56.530 Process with pid 59103 is not found 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59086 ']' 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59086 00:10:56.530 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59086) - No such process 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59086 is not found' 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59103 ]] 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59103 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59103 ']' 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59103 00:10:56.530 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59103) - No such process 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59103 is not found' 00:10:56.530 10:54:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:56.530 00:10:56.530 real 0m19.626s 00:10:56.530 user 0m33.594s 00:10:56.530 sys 0m5.437s 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.530 10:54:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:56.530 ************************************ 00:10:56.530 END TEST cpu_locks 00:10:56.530 ************************************ 00:10:56.530 00:10:56.530 real 0m47.919s 00:10:56.530 user 1m32.478s 00:10:56.530 sys 0m9.579s 00:10:56.530 10:54:23 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.530 ************************************ 00:10:56.530 END TEST event 00:10:56.530 ************************************ 00:10:56.530 10:54:23 event -- common/autotest_common.sh@10 -- # set +x 00:10:56.530 10:54:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:56.530 10:54:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:56.530 10:54:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.530 10:54:23 -- common/autotest_common.sh@10 -- # set +x 00:10:56.530 ************************************ 00:10:56.530 START TEST thread 00:10:56.530 ************************************ 00:10:56.530 10:54:23 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:56.788 * Looking for test storage... 
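The cleanup block that just closed out the event suite shows why the "No such process" lines are harmless: killprocess probes the pid with kill -0 (and checks the process name via ps --no-headers -o comm=) before signalling, so a target that already exited during the test only produces the "Process with pid ... is not found" echo. A condensed sketch of that pattern:

killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null || true   # reap; a non-zero status is expected
    else
        echo "Process with pid $pid is not found"
    fi
}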
00:10:56.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:56.788 10:54:23 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:56.788 10:54:23 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:56.788 10:54:23 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:56.788 10:54:23 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:56.788 10:54:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.788 10:54:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.788 10:54:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.788 10:54:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.788 10:54:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.788 10:54:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.788 10:54:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.788 10:54:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.788 10:54:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.788 10:54:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.788 10:54:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.788 10:54:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:56.788 10:54:23 thread -- scripts/common.sh@345 -- # : 1 00:10:56.788 10:54:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.788 10:54:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:56.788 10:54:23 thread -- scripts/common.sh@365 -- # decimal 1 00:10:56.789 10:54:23 thread -- scripts/common.sh@353 -- # local d=1 00:10:56.789 10:54:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.789 10:54:23 thread -- scripts/common.sh@355 -- # echo 1 00:10:56.789 10:54:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.789 10:54:23 thread -- scripts/common.sh@366 -- # decimal 2 00:10:56.789 10:54:23 thread -- scripts/common.sh@353 -- # local d=2 00:10:56.789 10:54:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.789 10:54:23 thread -- scripts/common.sh@355 -- # echo 2 00:10:56.789 10:54:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.789 10:54:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.789 10:54:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.789 10:54:23 thread -- scripts/common.sh@368 -- # return 0 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:56.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.789 --rc genhtml_branch_coverage=1 00:10:56.789 --rc genhtml_function_coverage=1 00:10:56.789 --rc genhtml_legend=1 00:10:56.789 --rc geninfo_all_blocks=1 00:10:56.789 --rc geninfo_unexecuted_blocks=1 00:10:56.789 00:10:56.789 ' 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:56.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.789 --rc genhtml_branch_coverage=1 00:10:56.789 --rc genhtml_function_coverage=1 00:10:56.789 --rc genhtml_legend=1 00:10:56.789 --rc geninfo_all_blocks=1 00:10:56.789 --rc geninfo_unexecuted_blocks=1 00:10:56.789 00:10:56.789 ' 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:56.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:10:56.789 --rc genhtml_branch_coverage=1 00:10:56.789 --rc genhtml_function_coverage=1 00:10:56.789 --rc genhtml_legend=1 00:10:56.789 --rc geninfo_all_blocks=1 00:10:56.789 --rc geninfo_unexecuted_blocks=1 00:10:56.789 00:10:56.789 ' 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:56.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.789 --rc genhtml_branch_coverage=1 00:10:56.789 --rc genhtml_function_coverage=1 00:10:56.789 --rc genhtml_legend=1 00:10:56.789 --rc geninfo_all_blocks=1 00:10:56.789 --rc geninfo_unexecuted_blocks=1 00:10:56.789 00:10:56.789 ' 00:10:56.789 10:54:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.789 10:54:23 thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.789 ************************************ 00:10:56.789 START TEST thread_poller_perf 00:10:56.789 ************************************ 00:10:56.789 10:54:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:57.046 [2024-12-05 10:54:23.952030] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:57.046 [2024-12-05 10:54:23.952444] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59229 ] 00:10:57.046 [2024-12-05 10:54:24.107246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.046 [2024-12-05 10:54:24.159119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.046 Running 1000 pollers for 1 seconds with 1 microseconds period. 
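The result block that follows reduces to one formula: poller_cost = busy cycles / total_run_count, then cycles / (tsc_hz / 1e9) for nanoseconds, with integer truncation at each step (the truncation is an assumption, but it matches the printed values). Checking the first run's numbers with awk:

awk 'BEGIN {
    busy = 2498954008; runs = 402000; tsc_hz = 2490000000
    cyc  = int(busy / runs)              # 6216
    nsec = int(cyc * 1e9 / tsc_hz)       # 2496
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'
# poller_cost: 6216 (cyc), 2496 (nsec)

The second, zero-period run below lands at 468 cyc / 187 nsec per poll: with no sleep between iterations, the measured per-poll overhead drops by an order of magnitude.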
00:10:58.418 [2024-12-05T10:54:25.577Z] ====================================== 00:10:58.418 [2024-12-05T10:54:25.577Z] busy:2498954008 (cyc) 00:10:58.418 [2024-12-05T10:54:25.577Z] total_run_count: 402000 00:10:58.418 [2024-12-05T10:54:25.577Z] tsc_hz: 2490000000 (cyc) 00:10:58.418 [2024-12-05T10:54:25.577Z] ====================================== 00:10:58.418 [2024-12-05T10:54:25.577Z] poller_cost: 6216 (cyc), 2496 (nsec) 00:10:58.418 00:10:58.418 ************************************ 00:10:58.418 END TEST thread_poller_perf 00:10:58.418 ************************************ 00:10:58.418 real 0m1.283s 00:10:58.418 user 0m1.130s 00:10:58.418 sys 0m0.046s 00:10:58.418 10:54:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.418 10:54:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:58.418 10:54:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:58.418 10:54:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:58.418 10:54:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.418 10:54:25 thread -- common/autotest_common.sh@10 -- # set +x 00:10:58.418 ************************************ 00:10:58.418 START TEST thread_poller_perf 00:10:58.418 ************************************ 00:10:58.418 10:54:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:58.418 [2024-12-05 10:54:25.308337] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:10:58.418 [2024-12-05 10:54:25.308444] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:10:58.418 [2024-12-05 10:54:25.459179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.418 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:58.418 [2024-12-05 10:54:25.503661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.792 [2024-12-05T10:54:26.951Z] ====================================== 00:10:59.792 [2024-12-05T10:54:26.951Z] busy:2492073678 (cyc) 00:10:59.792 [2024-12-05T10:54:26.951Z] total_run_count: 5323000 00:10:59.792 [2024-12-05T10:54:26.951Z] tsc_hz: 2490000000 (cyc) 00:10:59.792 [2024-12-05T10:54:26.951Z] ====================================== 00:10:59.792 [2024-12-05T10:54:26.951Z] poller_cost: 468 (cyc), 187 (nsec) 00:10:59.792 00:10:59.792 real 0m1.261s 00:10:59.792 user 0m1.110s 00:10:59.792 sys 0m0.046s 00:10:59.792 10:54:26 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.792 ************************************ 00:10:59.792 END TEST thread_poller_perf 00:10:59.792 ************************************ 00:10:59.792 10:54:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:59.792 10:54:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:59.792 00:10:59.792 real 0m2.925s 00:10:59.792 user 0m2.409s 00:10:59.792 sys 0m0.305s 00:10:59.792 10:54:26 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.792 ************************************ 00:10:59.792 END TEST thread 00:10:59.792 10:54:26 thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.792 ************************************ 00:10:59.792 10:54:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:59.792 10:54:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:59.792 10:54:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.792 10:54:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.792 10:54:26 -- common/autotest_common.sh@10 -- # set +x 00:10:59.792 ************************************ 00:10:59.792 START TEST app_cmdline 00:10:59.792 ************************************ 00:10:59.792 10:54:26 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:59.792 * Looking for test storage... 
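The preamble now repeating for app_cmdline (and again for version and spdk_dd further down) gates the coverage flags on the installed lcov: scripts/common.sh splits both version strings into arrays on '.', '-' and ':' and compares them field by field. A minimal sketch of that comparison, restricted to numeric fields:

lt() {    # lt 1.15 2 -> exit 0 when $1 is strictly older than $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not less-than
}

lt 1.15 2 && echo 'lcov < 2: use the pre-2.0 option set'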
00:10:59.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:59.792 10:54:26 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.792 10:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.792 10:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.792 10:54:26 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.792 10:54:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.793 10:54:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.793 --rc genhtml_branch_coverage=1 00:10:59.793 --rc genhtml_function_coverage=1 00:10:59.793 --rc genhtml_legend=1 00:10:59.793 --rc geninfo_all_blocks=1 00:10:59.793 --rc geninfo_unexecuted_blocks=1 00:10:59.793 00:10:59.793 ' 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.793 --rc genhtml_branch_coverage=1 00:10:59.793 --rc genhtml_function_coverage=1 00:10:59.793 --rc genhtml_legend=1 00:10:59.793 --rc geninfo_all_blocks=1 00:10:59.793 --rc geninfo_unexecuted_blocks=1 00:10:59.793 
00:10:59.793 ' 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.793 --rc genhtml_branch_coverage=1 00:10:59.793 --rc genhtml_function_coverage=1 00:10:59.793 --rc genhtml_legend=1 00:10:59.793 --rc geninfo_all_blocks=1 00:10:59.793 --rc geninfo_unexecuted_blocks=1 00:10:59.793 00:10:59.793 ' 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.793 --rc genhtml_branch_coverage=1 00:10:59.793 --rc genhtml_function_coverage=1 00:10:59.793 --rc genhtml_legend=1 00:10:59.793 --rc geninfo_all_blocks=1 00:10:59.793 --rc geninfo_unexecuted_blocks=1 00:10:59.793 00:10:59.793 ' 00:10:59.793 10:54:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:59.793 10:54:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59347 00:10:59.793 10:54:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:59.793 10:54:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59347 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59347 ']' 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.793 10:54:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:00.051 [2024-12-05 10:54:26.961509] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
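This target is started deliberately restricted: --rpcs-allowed spdk_get_version,rpc_get_methods (visible in the invocation above) whitelists exactly two methods, and the test then proves both directions, allowed calls succeed while anything else is rejected before dispatch with -32601. A sketch of the same probe against a locally started target, with paths as they appear in the log:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
    --rpcs-allowed spdk_get_version,rpc_get_methods &

# allowed by the whitelist:
scripts/rpc.py spdk_get_version
scripts/rpc.py rpc_get_methods

# implemented by the target but filtered out: expect "Method not found" (-32601)
scripts/rpc.py env_dpdk_get_mem_stats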
00:11:00.051 [2024-12-05 10:54:26.961711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59347 ] 00:11:00.051 [2024-12-05 10:54:27.113093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.051 [2024-12-05 10:54:27.159599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.307 [2024-12-05 10:54:27.215527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:00.873 10:54:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:00.873 10:54:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:11:00.873 10:54:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:01.131 { 00:11:01.131 "version": "SPDK v25.01-pre git sha1 3a4e432ea", 00:11:01.131 "fields": { 00:11:01.131 "major": 25, 00:11:01.131 "minor": 1, 00:11:01.131 "patch": 0, 00:11:01.131 "suffix": "-pre", 00:11:01.131 "commit": "3a4e432ea" 00:11:01.131 } 00:11:01.131 } 00:11:01.131 10:54:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:01.131 10:54:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:01.131 10:54:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:01.131 10:54:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:01.131 10:54:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.132 10:54:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:01.132 10:54:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.132 10:54:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:01.132 10:54:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:01.132 10:54:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:01.132 10:54:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:01.390 request: 00:11:01.390 { 00:11:01.390 "method": "env_dpdk_get_mem_stats", 00:11:01.390 "req_id": 1 00:11:01.390 } 00:11:01.390 Got JSON-RPC error response 00:11:01.390 response: 00:11:01.390 { 00:11:01.390 "code": -32601, 00:11:01.390 "message": "Method not found" 00:11:01.390 } 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:01.390 10:54:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59347 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59347 ']' 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59347 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59347 00:11:01.390 killing process with pid 59347 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59347' 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 59347 00:11:01.390 10:54:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 59347 00:11:01.647 00:11:01.647 real 0m2.015s 00:11:01.647 user 0m2.342s 00:11:01.647 sys 0m0.530s 00:11:01.647 10:54:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.647 ************************************ 00:11:01.648 END TEST app_cmdline 00:11:01.648 ************************************ 00:11:01.648 10:54:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 10:54:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:01.648 10:54:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.648 10:54:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.648 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:01.648 ************************************ 00:11:01.648 START TEST version 00:11:01.648 ************************************ 00:11:01.648 10:54:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:01.907 * Looking for test storage... 
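The version suite about to run derives every number from include/spdk/version.h with the grep/cut/tr pipeline traced below; stitched together, and with the suffix-to-rc0 step inferred from the version=25.1rc0 assignment in the trace, it amounts to:

H=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$H" | cut -f2 | tr -d '"')
patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$H" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$H" | cut -f2 | tr -d '"')
version=$major.$minor
(( patch != 0 )) && version+=".$patch"
[[ $suffix == -pre ]] && version+=rc0     # 25.1-pre becomes 25.1rc0, as below
echo "$version"

The final assertion compares this against python3 -c 'import spdk; print(spdk.__version__)', so any mismatch between the C header and the Python package fails the suite.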
00:11:01.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.907 10:54:28 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.907 10:54:28 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.907 10:54:28 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.907 10:54:28 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.907 10:54:28 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.907 10:54:28 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.907 10:54:28 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.907 10:54:28 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.907 10:54:28 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.907 10:54:28 version -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.907 10:54:28 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.907 10:54:28 version -- scripts/common.sh@344 -- # case "$op" in 00:11:01.907 10:54:28 version -- scripts/common.sh@345 -- # : 1 00:11:01.907 10:54:28 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.907 10:54:28 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:01.907 10:54:28 version -- scripts/common.sh@365 -- # decimal 1 00:11:01.907 10:54:28 version -- scripts/common.sh@353 -- # local d=1 00:11:01.907 10:54:28 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.907 10:54:28 version -- scripts/common.sh@355 -- # echo 1 00:11:01.907 10:54:28 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.907 10:54:28 version -- scripts/common.sh@366 -- # decimal 2 00:11:01.907 10:54:28 version -- scripts/common.sh@353 -- # local d=2 00:11:01.907 10:54:28 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.907 10:54:28 version -- scripts/common.sh@355 -- # echo 2 00:11:01.907 10:54:28 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.907 10:54:28 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.907 10:54:28 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.907 10:54:28 version -- scripts/common.sh@368 -- # return 0 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.907 --rc genhtml_branch_coverage=1 00:11:01.907 --rc genhtml_function_coverage=1 00:11:01.907 --rc genhtml_legend=1 00:11:01.907 --rc geninfo_all_blocks=1 00:11:01.907 --rc geninfo_unexecuted_blocks=1 00:11:01.907 00:11:01.907 ' 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.907 --rc genhtml_branch_coverage=1 00:11:01.907 --rc genhtml_function_coverage=1 00:11:01.907 --rc genhtml_legend=1 00:11:01.907 --rc geninfo_all_blocks=1 00:11:01.907 --rc geninfo_unexecuted_blocks=1 00:11:01.907 00:11:01.907 ' 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.907 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:01.907 --rc genhtml_branch_coverage=1 00:11:01.907 --rc genhtml_function_coverage=1 00:11:01.907 --rc genhtml_legend=1 00:11:01.907 --rc geninfo_all_blocks=1 00:11:01.907 --rc geninfo_unexecuted_blocks=1 00:11:01.907 00:11:01.907 ' 00:11:01.907 10:54:28 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.907 --rc genhtml_branch_coverage=1 00:11:01.907 --rc genhtml_function_coverage=1 00:11:01.907 --rc genhtml_legend=1 00:11:01.907 --rc geninfo_all_blocks=1 00:11:01.907 --rc geninfo_unexecuted_blocks=1 00:11:01.907 00:11:01.907 ' 00:11:01.907 10:54:28 version -- app/version.sh@17 -- # get_header_version major 00:11:01.907 10:54:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # cut -f2 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.907 10:54:29 version -- app/version.sh@17 -- # major=25 00:11:01.907 10:54:29 version -- app/version.sh@18 -- # get_header_version minor 00:11:01.907 10:54:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # cut -f2 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.907 10:54:29 version -- app/version.sh@18 -- # minor=1 00:11:01.907 10:54:29 version -- app/version.sh@19 -- # get_header_version patch 00:11:01.907 10:54:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # cut -f2 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.907 10:54:29 version -- app/version.sh@19 -- # patch=0 00:11:01.907 10:54:29 version -- app/version.sh@20 -- # get_header_version suffix 00:11:01.907 10:54:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # cut -f2 00:11:01.907 10:54:29 version -- app/version.sh@14 -- # tr -d '"' 00:11:01.907 10:54:29 version -- app/version.sh@20 -- # suffix=-pre 00:11:01.907 10:54:29 version -- app/version.sh@22 -- # version=25.1 00:11:01.907 10:54:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:01.907 10:54:29 version -- app/version.sh@28 -- # version=25.1rc0 00:11:01.907 10:54:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:01.907 10:54:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:02.166 10:54:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:02.166 10:54:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:02.166 00:11:02.166 real 0m0.336s 00:11:02.166 user 0m0.203s 00:11:02.166 sys 0m0.193s 00:11:02.166 ************************************ 00:11:02.166 END TEST version 00:11:02.166 ************************************ 00:11:02.166 10:54:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.166 10:54:29 version -- common/autotest_common.sh@10 -- # set +x 00:11:02.166 10:54:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:02.166 10:54:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:11:02.166 10:54:29 -- spdk/autotest.sh@194 -- # uname -s 00:11:02.166 10:54:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:02.166 10:54:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:02.166 10:54:29 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:11:02.166 10:54:29 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:11:02.166 10:54:29 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:02.166 10:54:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:02.166 10:54:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.166 10:54:29 -- common/autotest_common.sh@10 -- # set +x 00:11:02.166 ************************************ 00:11:02.166 START TEST spdk_dd 00:11:02.166 ************************************ 00:11:02.166 10:54:29 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:11:02.166 * Looking for test storage... 00:11:02.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:02.166 10:54:29 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.166 10:54:29 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.166 10:54:29 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@345 -- # : 1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@368 -- # return 0 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.425 --rc genhtml_branch_coverage=1 00:11:02.425 --rc genhtml_function_coverage=1 00:11:02.425 --rc genhtml_legend=1 00:11:02.425 --rc geninfo_all_blocks=1 00:11:02.425 --rc geninfo_unexecuted_blocks=1 00:11:02.425 00:11:02.425 ' 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.425 --rc genhtml_branch_coverage=1 00:11:02.425 --rc genhtml_function_coverage=1 00:11:02.425 --rc genhtml_legend=1 00:11:02.425 --rc geninfo_all_blocks=1 00:11:02.425 --rc geninfo_unexecuted_blocks=1 00:11:02.425 00:11:02.425 ' 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.425 --rc genhtml_branch_coverage=1 00:11:02.425 --rc genhtml_function_coverage=1 00:11:02.425 --rc genhtml_legend=1 00:11:02.425 --rc geninfo_all_blocks=1 00:11:02.425 --rc geninfo_unexecuted_blocks=1 00:11:02.425 00:11:02.425 ' 00:11:02.425 10:54:29 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.425 --rc genhtml_branch_coverage=1 00:11:02.425 --rc genhtml_function_coverage=1 00:11:02.425 --rc genhtml_legend=1 00:11:02.425 --rc geninfo_all_blocks=1 00:11:02.425 --rc geninfo_unexecuted_blocks=1 00:11:02.425 00:11:02.425 ' 00:11:02.425 10:54:29 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.425 10:54:29 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.426 10:54:29 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.426 10:54:29 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.426 10:54:29 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.426 10:54:29 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.426 10:54:29 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.426 10:54:29 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.426 10:54:29 spdk_dd -- paths/export.sh@5 -- # export PATH 00:11:02.426 10:54:29 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.426 10:54:29 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:02.993 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:02.993 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:02.993 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:02.993 10:54:30 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:11:02.993 10:54:30 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@233 -- # local class 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@235 -- # local progif 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@236 -- # class=01 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:11:02.993 10:54:30 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@18 -- # local i 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@27 -- # return 0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@18 -- # local i 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@27 -- # return 0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:11:02.993 10:54:30 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:02.993 10:54:30 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@139 -- # local lib 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
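[The nvme_in_userspace trace above walks PCI devices by class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe), i.e. the "0108" pattern fed to awk. A minimal standalone sketch of that pipeline, assuming pciutils' lspci is installed; this condenses the scripts/common.sh logic traced above rather than quoting it:

  # Print the BDF of every NVMe controller (PCI class 01, subclass 08, prog-if 02).
  # lspci -mm -n -D emits quoted fields; field 2 is the class/subclass code.
  lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM it yields 0000:00:10.0 and 0000:00:11.0, matching the printf '%s\n' output above.]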
00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
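[The run of [[ ... == liburing.so.* ]] lines above and below is check_liburing from dd/common.sh deciding whether the spdk_dd binary was linked against liburing: it lists the binary's ELF NEEDED entries and pattern-matches each one. A minimal sketch of the same idea, assuming objdump from binutils and the build path seen in this trace:

  # Set liburing_in_use=1 if the binary's dynamic dependencies include liburing.
  # objdump -p prints lines like "  NEEDED   liburing.so.2"; read splits off the name.
  liburing_in_use=0
  while read -r _ lib _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1
  done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
  echo "liburing_in_use=$liburing_in_use"

Here the scan reaches liburing.so.2, so the check succeeds and the dd tests can exercise the io_uring path.]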
00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.993 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:11:02.994 * spdk_dd linked to liburing 00:11:02.994 10:54:30 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:02.995 10:54:30 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:02.995 10:54:30 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:03.254 10:54:30 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:03.254 10:54:30 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:11:03.254 10:54:30 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:11:03.254 10:54:30 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:11:03.254 10:54:30 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:11:03.254 10:54:30 spdk_dd -- dd/common.sh@153 -- # return 0 00:11:03.254 10:54:30 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:11:03.254 10:54:30 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:03.254 10:54:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.254 10:54:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.254 10:54:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:03.254 ************************************ 00:11:03.254 START TEST spdk_dd_basic_rw 00:11:03.254 ************************************ 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:11:03.254 * Looking for test storage... 00:11:03.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.254 --rc genhtml_branch_coverage=1 00:11:03.254 --rc genhtml_function_coverage=1 00:11:03.254 --rc genhtml_legend=1 00:11:03.254 --rc geninfo_all_blocks=1 00:11:03.254 --rc geninfo_unexecuted_blocks=1 00:11:03.254 00:11:03.254 ' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.254 --rc genhtml_branch_coverage=1 00:11:03.254 --rc genhtml_function_coverage=1 00:11:03.254 --rc genhtml_legend=1 00:11:03.254 --rc geninfo_all_blocks=1 00:11:03.254 --rc geninfo_unexecuted_blocks=1 00:11:03.254 00:11:03.254 ' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.254 --rc genhtml_branch_coverage=1 00:11:03.254 --rc genhtml_function_coverage=1 00:11:03.254 --rc genhtml_legend=1 00:11:03.254 --rc geninfo_all_blocks=1 00:11:03.254 --rc geninfo_unexecuted_blocks=1 00:11:03.254 00:11:03.254 ' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.254 --rc genhtml_branch_coverage=1 00:11:03.254 --rc genhtml_function_coverage=1 00:11:03.254 --rc genhtml_legend=1 00:11:03.254 --rc geninfo_all_blocks=1 00:11:03.254 --rc geninfo_unexecuted_blocks=1 00:11:03.254 00:11:03.254 ' 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.254 10:54:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
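[get_native_nvme_bs, which produces the large identify dump below, asks spdk_nvme_identify for the controller data and extracts the data size of the current LBA format with two regex matches. A condensed sketch of that flow, assuming it runs from the SPDK repo root; the patterns are kept in variables so bash treats them as regexes:

  # Report the native block size of the NVMe controller at the given PCI address.
  id=$(build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
  pat='Current LBA Format: *LBA Format #([0-9]+)'
  if [[ $id =~ $pat ]]; then
    lbaf=${BASH_REMATCH[1]}                          # "04" for this QEMU drive
    pat="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $pat ]] && echo "${BASH_REMATCH[1]}"   # 4096 bytes here
  fi

The dump below shows Current LBA Format: LBA Format #04 and LBA Format #04: Data Size: 4096, so basic_rw runs with native_bs=4096.]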
00:11:03.255 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:11:03.535 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:11:03.535 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:11:03.535 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:11:03.536 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:11:03.536 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ [duplicate NVMe identify output omitted; identical to the dump above] =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:11:03.537 ************************************
00:11:03.537 START TEST dd_bs_lt_native_bs ************************************
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:03.537 10:54:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:11:03.537 { 00:11:03.537 "subsystems": [ 00:11:03.537 { 00:11:03.537 "subsystem": "bdev", 00:11:03.537 "config": [ 00:11:03.537 { 00:11:03.537 "params": { 00:11:03.537 "trtype": "pcie", 00:11:03.537 "traddr": "0000:00:10.0", 00:11:03.537 "name": "Nvme0" 00:11:03.537 }, 00:11:03.537 "method": "bdev_nvme_attach_controller" 00:11:03.537 }, 00:11:03.537 { 00:11:03.537 "method": "bdev_wait_for_examine" 00:11:03.537 } 00:11:03.537 ] 00:11:03.537 } 00:11:03.537 ] 00:11:03.537 } 00:11:03.795 [2024-12-05 10:54:30.708052] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:03.795 [2024-12-05 10:54:30.708117] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59699 ] 00:11:03.795 [2024-12-05 10:54:30.859305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.795 [2024-12-05 10:54:30.909333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.795 [2024-12-05 10:54:30.951237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.053 [2024-12-05 10:54:31.053722] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:11:04.053 [2024-12-05 10:54:31.053786] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.053 [2024-12-05 10:54:31.160027] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:11:04.312 ************************************ 00:11:04.312 END TEST dd_bs_lt_native_bs 00:11:04.312 ************************************ 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.312 00:11:04.312 real 0m0.570s 00:11:04.312 user 0m0.381s 00:11:04.312 sys 0m0.147s 00:11:04.312 
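Note: dd_bs_lt_native_bs above is a negative test. dd/common.sh derives the native block size from the identify output (the `LBA Format #04: Data Size: *([0-9]+)` match yields 4096), and spdk_dd is then expected to reject --bs=2048 with the "--bs value cannot be less than input (1) neither output (4096) native block size" error; the NOT wrapper turns that non-zero exit (es=234, normalized down to 1) into a pass. A minimal sketch of the same check outside the harness, assuming a prepared bdev config at ./bdev.json (a hypothetical stand-in for the /dev/fd/61 descriptor gen_conf feeds in):

#!/usr/bin/env bash
# Negative-test sketch: spdk_dd must refuse a --bs smaller than the target
# bdev's 4096-byte native block size. --if=/dev/zero only gives it an input;
# the run fails at parameter validation before any copy happens.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json ./bdev.json; then
  echo "FAIL: undersized --bs was accepted" >&2
  exit 1
fi
echo "PASS: undersized --bs rejected"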
10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:04.312 ************************************ 00:11:04.312 START TEST dd_rw 00:11:04.312 ************************************ 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:04.312 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:04.880 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:11:04.880 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:04.880 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:04.880 10:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:04.880 [2024-12-05 10:54:31.866460] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
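Note: the dd_rw test that starts above sweeps a small matrix: three block sizes derived from the native one by left shifts, each at queue depths 1 and 64, with the block count scaled so every pass moves a similar number of bytes. A sketch of that arithmetic using only the values visible in this log:

native_bs=4096
qds=(1 64)                      # queue depths exercised per block size
bss=()
for bs in {0..2}; do
  bss+=($((native_bs << bs)))   # 4096, 8192, 16384
done
# block counts used below: 15, 7, 3 -> per-pass transfer sizes in bytes
echo $((15 * 4096))             # 61440
echo $((7 * 8192))              # 57344
echo $((3 * 16384))             # 49152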
00:11:04.880 [2024-12-05 10:54:31.866689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:11:04.880 { 00:11:04.880 "subsystems": [ 00:11:04.880 { 00:11:04.880 "subsystem": "bdev", 00:11:04.880 "config": [ 00:11:04.880 { 00:11:04.880 "params": { 00:11:04.880 "trtype": "pcie", 00:11:04.880 "traddr": "0000:00:10.0", 00:11:04.880 "name": "Nvme0" 00:11:04.880 }, 00:11:04.880 "method": "bdev_nvme_attach_controller" 00:11:04.880 }, 00:11:04.880 { 00:11:04.880 "method": "bdev_wait_for_examine" 00:11:04.880 } 00:11:04.880 ] 00:11:04.880 } 00:11:04.880 ] 00:11:04.880 } 00:11:04.880 [2024-12-05 10:54:32.015189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.140 [2024-12-05 10:54:32.066316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.140 [2024-12-05 10:54:32.107954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.140  [2024-12-05T10:54:32.558Z] Copying: 60/60 [kB] (average 29 MBps) 00:11:05.399 00:11:05.399 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:11:05.399 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:05.399 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:05.399 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:05.399 { 00:11:05.399 "subsystems": [ 00:11:05.399 { 00:11:05.399 "subsystem": "bdev", 00:11:05.399 "config": [ 00:11:05.399 { 00:11:05.399 "params": { 00:11:05.399 "trtype": "pcie", 00:11:05.399 "traddr": "0000:00:10.0", 00:11:05.399 "name": "Nvme0" 00:11:05.399 }, 00:11:05.399 "method": "bdev_nvme_attach_controller" 00:11:05.399 }, 00:11:05.399 { 00:11:05.399 "method": "bdev_wait_for_examine" 00:11:05.399 } 00:11:05.399 ] 00:11:05.399 } 00:11:05.399 ] 00:11:05.399 } 00:11:05.399 [2024-12-05 10:54:32.428079] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
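Note: the { "subsystems": ... } block repeated around each run is the bdev configuration gen_conf pipes to spdk_dd over /dev/fd/62: it attaches the emulated PCIe controller at 0000:00:10.0 as the bdev Nvme0 and waits for bdev examine to finish before any I/O. An equivalent on-disk file (the hypothetical ./bdev.json used in the sketches here) could be written as:

cat > ./bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}
EOF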
00:11:05.399 [2024-12-05 10:54:32.428152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59743 ] 00:11:05.658 [2024-12-05 10:54:32.580946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.658 [2024-12-05 10:54:32.633191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.658 [2024-12-05 10:54:32.675713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.658  [2024-12-05T10:54:33.076Z] Copying: 60/60 [kB] (average 19 MBps) 00:11:05.917 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:05.917 10:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:05.917 [2024-12-05 10:54:32.997514] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:05.917 [2024-12-05 10:54:32.997749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59764 ] 00:11:05.917 { 00:11:05.917 "subsystems": [ 00:11:05.917 { 00:11:05.917 "subsystem": "bdev", 00:11:05.917 "config": [ 00:11:05.917 { 00:11:05.917 "params": { 00:11:05.917 "trtype": "pcie", 00:11:05.917 "traddr": "0000:00:10.0", 00:11:05.917 "name": "Nvme0" 00:11:05.917 }, 00:11:05.917 "method": "bdev_nvme_attach_controller" 00:11:05.917 }, 00:11:05.917 { 00:11:05.917 "method": "bdev_wait_for_examine" 00:11:05.917 } 00:11:05.917 ] 00:11:05.917 } 00:11:05.917 ] 00:11:05.917 } 00:11:06.176 [2024-12-05 10:54:33.149710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.176 [2024-12-05 10:54:33.194321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.176 [2024-12-05 10:54:33.235652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:06.435  [2024-12-05T10:54:33.594Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:06.435 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:06.435 10:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:07.005 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:11:07.005 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:07.005 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:07.005 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:07.005 { 00:11:07.005 "subsystems": [ 00:11:07.005 { 00:11:07.005 "subsystem": "bdev", 00:11:07.005 "config": [ 00:11:07.005 { 00:11:07.005 "params": { 00:11:07.005 "trtype": "pcie", 00:11:07.005 "traddr": "0000:00:10.0", 00:11:07.005 "name": "Nvme0" 00:11:07.005 }, 00:11:07.005 "method": "bdev_nvme_attach_controller" 00:11:07.005 }, 00:11:07.005 { 00:11:07.005 "method": "bdev_wait_for_examine" 00:11:07.005 } 00:11:07.005 ] 00:11:07.005 } 00:11:07.005 ] 00:11:07.005 } 00:11:07.005 [2024-12-05 10:54:34.079371] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
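Note: each dd_rw pass follows the verify cycle just completed above: write dd.dump0 into the bdev at the pass's --bs/--qd, read the same range back into dd.dump1, byte-compare the two dumps with diff -q, then reset the region before the next pass (clear_nvme writes a single 1 MiB block of zeroes). A condensed sketch of the bs=4096/qd=1 cycle, again assuming ./bdev.json stands in for the gen_conf descriptor:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
"$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json ./bdev.json             # write (dd.dump0 holds 61440 bytes)
"$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=4096 --qd=1 --count=15 --json ./bdev.json  # read the 15 blocks back
diff -q "$D/dd.dump0" "$D/dd.dump1"                                                        # must be identical
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json ./bdev.json           # clear_nvme: zero 1 MiB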
00:11:07.005 [2024-12-05 10:54:34.079439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:11:07.264 [2024-12-05 10:54:34.221798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.264 [2024-12-05 10:54:34.266171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.264 [2024-12-05 10:54:34.307943] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.264  [2024-12-05T10:54:34.682Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:07.523 00:11:07.523 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:11:07.523 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:07.523 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:07.523 10:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:07.523 { 00:11:07.523 "subsystems": [ 00:11:07.523 { 00:11:07.523 "subsystem": "bdev", 00:11:07.523 "config": [ 00:11:07.523 { 00:11:07.523 "params": { 00:11:07.523 "trtype": "pcie", 00:11:07.523 "traddr": "0000:00:10.0", 00:11:07.523 "name": "Nvme0" 00:11:07.523 }, 00:11:07.523 "method": "bdev_nvme_attach_controller" 00:11:07.523 }, 00:11:07.523 { 00:11:07.523 "method": "bdev_wait_for_examine" 00:11:07.523 } 00:11:07.523 ] 00:11:07.523 } 00:11:07.523 ] 00:11:07.523 } 00:11:07.523 [2024-12-05 10:54:34.627794] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:07.523 [2024-12-05 10:54:34.627875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59791 ] 00:11:07.783 [2024-12-05 10:54:34.777857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.783 [2024-12-05 10:54:34.824223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.783 [2024-12-05 10:54:34.866931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.049  [2024-12-05T10:54:35.208Z] Copying: 60/60 [kB] (average 58 MBps) 00:11:08.049 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:08.049 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:08.049 { 00:11:08.049 "subsystems": [ 00:11:08.049 { 00:11:08.050 "subsystem": "bdev", 00:11:08.050 "config": [ 00:11:08.050 { 00:11:08.050 "params": { 00:11:08.050 "trtype": "pcie", 00:11:08.050 "traddr": "0000:00:10.0", 00:11:08.050 "name": "Nvme0" 00:11:08.050 }, 00:11:08.050 "method": "bdev_nvme_attach_controller" 00:11:08.050 }, 00:11:08.050 { 00:11:08.050 "method": "bdev_wait_for_examine" 00:11:08.050 } 00:11:08.050 ] 00:11:08.050 } 00:11:08.050 ] 00:11:08.050 } 00:11:08.050 [2024-12-05 10:54:35.186366] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:08.050 [2024-12-05 10:54:35.186433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59814 ] 00:11:08.308 [2024-12-05 10:54:35.335971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.308 [2024-12-05 10:54:35.386009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.308 [2024-12-05 10:54:35.427587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.566  [2024-12-05T10:54:35.725Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:08.566 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:08.566 10:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.134 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:11:09.134 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:09.134 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:09.134 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.134 [2024-12-05 10:54:36.247167] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:09.134 [2024-12-05 10:54:36.247243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59833 ] 00:11:09.134 { 00:11:09.134 "subsystems": [ 00:11:09.134 { 00:11:09.134 "subsystem": "bdev", 00:11:09.134 "config": [ 00:11:09.134 { 00:11:09.134 "params": { 00:11:09.134 "trtype": "pcie", 00:11:09.134 "traddr": "0000:00:10.0", 00:11:09.134 "name": "Nvme0" 00:11:09.134 }, 00:11:09.134 "method": "bdev_nvme_attach_controller" 00:11:09.134 }, 00:11:09.134 { 00:11:09.134 "method": "bdev_wait_for_examine" 00:11:09.134 } 00:11:09.134 ] 00:11:09.134 } 00:11:09.134 ] 00:11:09.134 } 00:11:09.393 [2024-12-05 10:54:36.399312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.393 [2024-12-05 10:54:36.446997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.393 [2024-12-05 10:54:36.488181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.653  [2024-12-05T10:54:36.812Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:09.653 00:11:09.653 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:09.653 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:09.653 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:11:09.653 10:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:09.653 [2024-12-05 10:54:36.790972] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:09.653 [2024-12-05 10:54:36.791169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:11:09.653 { 00:11:09.653 "subsystems": [ 00:11:09.653 { 00:11:09.653 "subsystem": "bdev", 00:11:09.653 "config": [ 00:11:09.653 { 00:11:09.653 "params": { 00:11:09.653 "trtype": "pcie", 00:11:09.653 "traddr": "0000:00:10.0", 00:11:09.653 "name": "Nvme0" 00:11:09.653 }, 00:11:09.653 "method": "bdev_nvme_attach_controller" 00:11:09.653 }, 00:11:09.653 { 00:11:09.653 "method": "bdev_wait_for_examine" 00:11:09.653 } 00:11:09.653 ] 00:11:09.653 } 00:11:09.653 ] 00:11:09.653 } 00:11:09.912 [2024-12-05 10:54:36.941682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.912 [2024-12-05 10:54:36.990979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.912 [2024-12-05 10:54:37.033530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:10.171  [2024-12-05T10:54:37.330Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:10.171 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:10.171 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:10.429 [2024-12-05 10:54:37.361884] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:10.429 [2024-12-05 10:54:37.362082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ]
00:11:10.429 {
00:11:10.429 "subsystems": [
00:11:10.429 {
00:11:10.429 "subsystem": "bdev",
00:11:10.429 "config": [
00:11:10.429 {
00:11:10.429 "params": {
00:11:10.429 "trtype": "pcie",
00:11:10.429 "traddr": "0000:00:10.0",
00:11:10.429 "name": "Nvme0"
00:11:10.429 },
00:11:10.429 "method": "bdev_nvme_attach_controller"
00:11:10.429 },
00:11:10.429 {
00:11:10.429 "method": "bdev_wait_for_examine"
00:11:10.429 }
00:11:10.429 ]
00:11:10.429 }
00:11:10.429 ]
00:11:10.429 }
00:11:10.429 [2024-12-05 10:54:37.512341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:10.429 [2024-12-05 10:54:37.562736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:10.687 [2024-12-05 10:54:37.605031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:11:10.687  [2024-12-05T10:54:38.104Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:11:10.945
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:11:10.945 10:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:11:11.203 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:11:11.203 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:11:11.203 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:11:11.203 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:11:11.461 [2024-12-05 10:54:38.391164] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:11:11.461 [2024-12-05 10:54:38.391265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:11:11.461 { 00:11:11.462 "subsystems": [ 00:11:11.462 { 00:11:11.462 "subsystem": "bdev", 00:11:11.462 "config": [ 00:11:11.462 { 00:11:11.462 "params": { 00:11:11.462 "trtype": "pcie", 00:11:11.462 "traddr": "0000:00:10.0", 00:11:11.462 "name": "Nvme0" 00:11:11.462 }, 00:11:11.462 "method": "bdev_nvme_attach_controller" 00:11:11.462 }, 00:11:11.462 { 00:11:11.462 "method": "bdev_wait_for_examine" 00:11:11.462 } 00:11:11.462 ] 00:11:11.462 } 00:11:11.462 ] 00:11:11.462 } 00:11:11.462 [2024-12-05 10:54:38.548565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.462 [2024-12-05 10:54:38.596811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.720 [2024-12-05 10:54:38.638747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:11.720  [2024-12-05T10:54:39.138Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:11.979 00:11:11.979 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:11:11.979 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:11.979 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:11.979 10:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:11.979 [2024-12-05 10:54:38.967511] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:11.979 [2024-12-05 10:54:38.967720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59895 ] 00:11:11.979 { 00:11:11.979 "subsystems": [ 00:11:11.979 { 00:11:11.979 "subsystem": "bdev", 00:11:11.979 "config": [ 00:11:11.979 { 00:11:11.979 "params": { 00:11:11.979 "trtype": "pcie", 00:11:11.979 "traddr": "0000:00:10.0", 00:11:11.979 "name": "Nvme0" 00:11:11.979 }, 00:11:11.979 "method": "bdev_nvme_attach_controller" 00:11:11.979 }, 00:11:11.979 { 00:11:11.979 "method": "bdev_wait_for_examine" 00:11:11.979 } 00:11:11.979 ] 00:11:11.979 } 00:11:11.979 ] 00:11:11.979 } 00:11:11.979 [2024-12-05 10:54:39.117791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.237 [2024-12-05 10:54:39.167154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.237 [2024-12-05 10:54:39.209421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.237  [2024-12-05T10:54:39.663Z] Copying: 56/56 [kB] (average 54 MBps) 00:11:12.504 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:12.504 10:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:12.504 { 00:11:12.504 "subsystems": [ 00:11:12.504 { 00:11:12.504 "subsystem": "bdev", 00:11:12.504 "config": [ 00:11:12.504 { 00:11:12.504 "params": { 00:11:12.504 "trtype": "pcie", 00:11:12.505 "traddr": "0000:00:10.0", 00:11:12.505 "name": "Nvme0" 00:11:12.505 }, 00:11:12.505 "method": "bdev_nvme_attach_controller" 00:11:12.505 }, 00:11:12.505 { 00:11:12.505 "method": "bdev_wait_for_examine" 00:11:12.505 } 00:11:12.505 ] 00:11:12.505 } 00:11:12.505 ] 00:11:12.505 } 00:11:12.505 [2024-12-05 10:54:39.540065] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:12.505 [2024-12-05 10:54:39.540241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59910 ] 00:11:12.762 [2024-12-05 10:54:39.688241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.762 [2024-12-05 10:54:39.731160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.762 [2024-12-05 10:54:39.773906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.762  [2024-12-05T10:54:40.180Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:13.021 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:13.021 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:13.586 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:11:13.586 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:13.586 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:13.586 10:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:13.586 [2024-12-05 10:54:40.523559] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:13.586 [2024-12-05 10:54:40.524215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59929 ] 00:11:13.586 { 00:11:13.586 "subsystems": [ 00:11:13.586 { 00:11:13.586 "subsystem": "bdev", 00:11:13.586 "config": [ 00:11:13.586 { 00:11:13.586 "params": { 00:11:13.586 "trtype": "pcie", 00:11:13.586 "traddr": "0000:00:10.0", 00:11:13.586 "name": "Nvme0" 00:11:13.586 }, 00:11:13.586 "method": "bdev_nvme_attach_controller" 00:11:13.586 }, 00:11:13.586 { 00:11:13.586 "method": "bdev_wait_for_examine" 00:11:13.586 } 00:11:13.586 ] 00:11:13.586 } 00:11:13.586 ] 00:11:13.586 } 00:11:13.586 [2024-12-05 10:54:40.673118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.586 [2024-12-05 10:54:40.718679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.842 [2024-12-05 10:54:40.766901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:13.842  [2024-12-05T10:54:41.257Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:14.098 00:11:14.098 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:14.099 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:14.099 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:11:14.099 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:14.099 [2024-12-05 10:54:41.088758] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:14.099 [2024-12-05 10:54:41.089319] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59943 ] 00:11:14.099 { 00:11:14.099 "subsystems": [ 00:11:14.099 { 00:11:14.099 "subsystem": "bdev", 00:11:14.099 "config": [ 00:11:14.099 { 00:11:14.099 "params": { 00:11:14.099 "trtype": "pcie", 00:11:14.099 "traddr": "0000:00:10.0", 00:11:14.099 "name": "Nvme0" 00:11:14.099 }, 00:11:14.099 "method": "bdev_nvme_attach_controller" 00:11:14.099 }, 00:11:14.099 { 00:11:14.099 "method": "bdev_wait_for_examine" 00:11:14.099 } 00:11:14.099 ] 00:11:14.099 } 00:11:14.099 ] 00:11:14.099 } 00:11:14.099 [2024-12-05 10:54:41.226955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.357 [2024-12-05 10:54:41.302370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.357 [2024-12-05 10:54:41.364614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:14.357  [2024-12-05T10:54:41.775Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:14.616 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:14.616 10:54:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:14.616 [2024-12-05 10:54:41.706127] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:14.616 [2024-12-05 10:54:41.706228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:11:14.616 { 00:11:14.616 "subsystems": [ 00:11:14.616 { 00:11:14.616 "subsystem": "bdev", 00:11:14.616 "config": [ 00:11:14.616 { 00:11:14.616 "params": { 00:11:14.616 "trtype": "pcie", 00:11:14.616 "traddr": "0000:00:10.0", 00:11:14.616 "name": "Nvme0" 00:11:14.616 }, 00:11:14.616 "method": "bdev_nvme_attach_controller" 00:11:14.616 }, 00:11:14.616 { 00:11:14.616 "method": "bdev_wait_for_examine" 00:11:14.616 } 00:11:14.616 ] 00:11:14.616 } 00:11:14.616 ] 00:11:14.616 } 00:11:14.874 [2024-12-05 10:54:41.861342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.874 [2024-12-05 10:54:41.914764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.874 [2024-12-05 10:54:41.958435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:15.131  [2024-12-05T10:54:42.290Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:15.131 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:11:15.131 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:15.697 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:11:15.697 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:11:15.697 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:15.697 10:54:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:15.697 { 00:11:15.697 "subsystems": [ 00:11:15.697 { 00:11:15.697 "subsystem": "bdev", 00:11:15.697 "config": [ 00:11:15.697 { 00:11:15.697 "params": { 00:11:15.697 "trtype": "pcie", 00:11:15.697 "traddr": "0000:00:10.0", 00:11:15.697 "name": "Nvme0" 00:11:15.697 }, 00:11:15.697 "method": "bdev_nvme_attach_controller" 00:11:15.697 }, 00:11:15.697 { 00:11:15.697 "method": "bdev_wait_for_examine" 00:11:15.697 } 00:11:15.697 ] 00:11:15.697 } 00:11:15.697 ] 00:11:15.697 } 00:11:15.697 [2024-12-05 10:54:42.735503] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:15.697 [2024-12-05 10:54:42.735730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59977 ] 00:11:15.956 [2024-12-05 10:54:42.891491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.956 [2024-12-05 10:54:42.938132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.956 [2024-12-05 10:54:42.982099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:15.956  [2024-12-05T10:54:43.373Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:16.214 00:11:16.214 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:11:16.214 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:11:16.214 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:16.214 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:16.214 [2024-12-05 10:54:43.295133] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:16.214 [2024-12-05 10:54:43.295221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59991 ] 00:11:16.214 { 00:11:16.214 "subsystems": [ 00:11:16.214 { 00:11:16.214 "subsystem": "bdev", 00:11:16.214 "config": [ 00:11:16.214 { 00:11:16.214 "params": { 00:11:16.214 "trtype": "pcie", 00:11:16.214 "traddr": "0000:00:10.0", 00:11:16.214 "name": "Nvme0" 00:11:16.214 }, 00:11:16.214 "method": "bdev_nvme_attach_controller" 00:11:16.214 }, 00:11:16.214 { 00:11:16.214 "method": "bdev_wait_for_examine" 00:11:16.214 } 00:11:16.214 ] 00:11:16.214 } 00:11:16.214 ] 00:11:16.214 } 00:11:16.472 [2024-12-05 10:54:43.446661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.472 [2024-12-05 10:54:43.501351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.472 [2024-12-05 10:54:43.543567] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.732  [2024-12-05T10:54:43.891Z] Copying: 48/48 [kB] (average 46 MBps) 00:11:16.732 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:16.732 10:54:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:16.732 [2024-12-05 10:54:43.868320] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:16.732 [2024-12-05 10:54:43.868574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60006 ] 00:11:16.732 { 00:11:16.732 "subsystems": [ 00:11:16.732 { 00:11:16.732 "subsystem": "bdev", 00:11:16.732 "config": [ 00:11:16.732 { 00:11:16.732 "params": { 00:11:16.732 "trtype": "pcie", 00:11:16.732 "traddr": "0000:00:10.0", 00:11:16.732 "name": "Nvme0" 00:11:16.732 }, 00:11:16.732 "method": "bdev_nvme_attach_controller" 00:11:16.732 }, 00:11:16.732 { 00:11:16.732 "method": "bdev_wait_for_examine" 00:11:16.732 } 00:11:16.732 ] 00:11:16.732 } 00:11:16.732 ] 00:11:16.732 } 00:11:16.991 [2024-12-05 10:54:44.017739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.991 [2024-12-05 10:54:44.072405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.991 [2024-12-05 10:54:44.114488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.250  [2024-12-05T10:54:44.409Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:17.250 00:11:17.250 ************************************ 00:11:17.250 END TEST dd_rw 00:11:17.250 ************************************ 00:11:17.250 00:11:17.250 real 0m13.098s 00:11:17.250 user 0m9.238s 00:11:17.250 sys 0m4.976s 00:11:17.250 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.250 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:17.509 ************************************ 00:11:17.509 START TEST dd_rw_offset 00:11:17.509 ************************************ 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:17.509 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:11:17.510 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=uusobvtbhmveo9yajam9cteaze00wkbt1ur8uqncira34r0ript1svi86oy93ldglyirbmt8ap3shwpz9c021jigg77as3hds7li3rk1k928q6re1h8gu2pcpaamddex9yvjy2fkzylsxdgimwbhwkwfd15ngsr1u5834rzplwhrkd9a7hlzvrtp4sh5lrsq8r85b2lfsdqif8lhubmlp9jeged9pqujjkv5fj9fc3rmjf7ji76pnvum0f6ekwnra3h8sv3honsnlytylx2bfyp8u0r5f9wn7bohek73o0vpb5xs0ac2cgteoj0nbgjx1mnkf4jy04e6nrxrkl3sfv9ln52c5kab9q7glfaps8e7be4fqjwl92z6j5h8m5wey4vf2raq74blwkdmv4d4vca97telypd2q6ylkzckvjwmn7c6edd8qjmdexemdz9xwaemk5o4ty5xhfl1ucjf2mj19g2ay3y6smsrzcavszzu9odviterk0j20g0n2hy52soinkvmk4epkj11qw8y7yigp8bbc52af3kqcec7mp63db8c6pvsmc3fph2c86azvxuq869likunravclds3ztggci6leio5lnk2spjvi42gigniao4864pykz3futy5rmx2rr895pcj4bcs2h5gczt0uknoz45c60tb22rhezy2rkvtpqkgycbiib7yzz4dv9fmg1awbccxrryo9wl6y0guermvfrgloo7jwovpae0tj532fvb4srjkx1tgrmp2t0to8kj1hfcgaef3lft7ixjc3nymwgn9ffznsotseztqufaebe43sint6340xavo1rr2z0cij9ktqkdkuz9gps2le4r8bu28wpbij3aj99m1xduoixidck6rx9bsnllnnfwzjbcs54e8i0ijrcb0dnoznfycs8tb43uzz179ndj1b1mr70mrr1v5tkj8vzmoazy2a8d92g17oh04j3lolw8hk99t3j9zwm7jj4mr5ohv6h8qiim2qfqry64onojx62xgcfxsxzx012y6kj29h3o9uybyt84fkuot3y7h0r7no9sdmpthdjqa2e3j71vcgnw4p9c49uy7ogumbcs5me4hmvxp5ng7ha6ibwzribtm5p1fl1zrjrl8wzo8xsc5tpwg0ti874gl79ifqdv4q7wo0rmuenrixecn6sv4ktma3nc38m9rkgeqfdukx7ls1o1rqdhxl2qck20ipw6bkmaryjtvir5uht84lqaobmlgf2u3bunq149tzmsobmgj1j4dj7nrnivvvokpy0mtbys8z6rgoh437yvvclkfft1ywdpihlau3qu9q51sfthbc2sc0zj4tfnp5fu1gmbvyxelu7cb79dvq4ippekt2wggqyqnewacrqjnqs5gbpq986iqtf4slsof1hmkpvaavq9mjfcz5prilytruibks68uabh74dai1m1n48kxdd1gfr2655qva74vjn8hrzjs3sty1xdfj06znoz1d65yrg6qkxl6b8f5ecpvqtwgnmz40dn8cxxfs1jt8qunqqcho23xen8utcvr0tbz4k78wh9cl7rvq9ewfws9p3v5t9erepv1bunbtbs9tz304cvzj06bp5vsxbt3duubyvlm68jzx2638lu5mznmjos9kalznhzv6tiibgui2421fb6rfh4yux07kjzwzf01azhosnjazhas41h7sh1taqkyldl19zv5a3vlfxyp1ovve31q5zi18d3dh95nxwii3wwqvumod1bm7vo0p60qszlvj7v5s5yf01zmdk11p5jh6s7bf5sbwqbkgiar34o2opk7bw2e6nrovr8dt2f32gqk1ov6y7budmiope7xbw1w3pvnmua909t6ojjp1v87n8r4er2v5x2vta95eq3xdlesjdxpbt0vpm73uo6n4tr6s7kfj3h4o7svvi3u1b53o9xkvc6n1zdzwjs9snxxw1yq8ej4hrlgkpwh2ftndwmx7mywkbwfrl5he5l8frsiyx1b4fwls0oc6zxslcvis7qareiz29gjc9comrk9e10h96u6p3zykvdsv2gpdapsuyf3hzh1d39t4gnmtb40yb24fbnbr1nx1qaoeptsougwaofpk2it85t1pt99dte38mpst4nnxnpew0ntnsbt26xm8whlr6hle52uvkzdd2ldkkrp9jtmgh7ia1hczsbiongqwl20qnok30z4k1tw9x0twfeg5jtosagciscae46hzt3bgaikeywkvpj6ox0uq9ca44h7zvdgwz3ut0qhpnrf7vsw0v27l5ofrpyt24vocfwzyspnaumbeeqt9zx3dglskaj36cxg1mcidjoiswnr7rq6hdr1wrtvq2avn5c5crd23rou9f4cjuajr7iw99auh0r51yv7xzvsahqxewadshhj1m89i070xiuvm07ardjqnxy31a4y575525xf8ulln3kx42nyuch3ae20g2yflyi1c8826vp49cdn1emhivm4sde6oufs9550by634kebvwg2t94spf2ty9lw1b8m2bm6fhhvi12efc91z03pmyalpmhdlnza16qk93fpm8bf6bp37cgqvhhyc5fabioazqrohapbesqhcsjkuz90x5wyvqw7g4fufdccew2gdw2mk2plkxdadlwp1lobwh9d3k14060x25byxk48m4nqx5j6pxlmkm5y3w8w4ps2xa40hsrp4pzpptitcqz8nzxile84kij4i8q8zhw789wyjhftvl7dj7hhgqd7jsrgs1duushizdrak32do740u76rf1h2c27v5i9gswoovigfwxyddar56hrxuiremv52dczj860wtpd5jdmlzmdazmygu8cu4ldstnjiad8z71byypxht3drptxqhdcfyly2j0kpn5y6konglb0e6k9trp9p9us4s6ol0zx8dcl5rkd1q5218jn4b70ej8bzf540fa0026nvs0cyb0sdmeanjgp7v001jwcu449u9swdj2q4bc83x2bo4v0kna3fxtoz7bhlctv0d3lj268cns9gx9c8ug9y3ogtcfhv43gc1x8qj6pss4zjv37u1ufwz92wvzhwl2xe6j2b4jy7ol2atyyx1kzg5gnis33p07c5d19e7aa9wwqy8v50s691jsx9qnp4h9k87t28n1eg83oyiar5a8rk0watghh7cvpo98t7zmie7fq3zpnsl7u5y6jzjy40eutb19zdy2z9g0h4laaf7pdmovcjx8ee3apsejqikqb4up0sglfvjk3xa6fg0k6whybjhijhnz6sgfbalkubh34kmt9pi9whvgbm24dwhteqj0h1xx5x6wcrz88hxnfbtnhok603mlxc19u4tuuotlfleqk61kcl2ccm3uy7wsdkylpqurq1nkociwnbca1ny3ok9byb696ai719leq38mkvroa2duis3mizniov1xtlba0aq8v7jas68cg7tyel7i2ufrrbvj8ektcnuk9rc09b50hvmuhl8rup0tcrvmvn4dcsdo9u60
allsndf11ihkxj5paer1hrxb6uwevdy438hdnprvtqw9rrhda6r7z59ummlg7wsarghl5a9sxikdlsplt0w4q5in7cbgqfehtofmct2jrjp0u4sca8gvbn5hbgltn1f8y3qisjow7nimqpq17o85zah39234oc5klorlcwy605e0f6thbxtqfc2wfxffeo0sn8k0deo1a6crdddwf218si1v3hfmsyhql8d4w0nqfc6d0k63j9vc86hw5b9d9g2qhweidntj7k7yko68s0sc5fuh36uv866lmysw1gguxc4nb2n93ztcoznvqn8qfgsksfuy6k8ndr850f22gij5zxjijdq8hmtalcifl6qol9353ss4qh5j57n56pcad9749qgrl9nvxj1ee4320wbjnm9xiizyga7iutvmkdr7biwxqiy5do91ygzvxd8upyfkq8ppgkfhnbb39rdisd0f5sds3mpr5pdkyoe8a60s7fg8n27br8ihagckiq4glfiq1mj1oio38hjq6t6ui35o3g2rj8lsfa5qot 00:11:17.510 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:11:17.510 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:11:17.510 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:11:17.510 10:54:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:17.510 [2024-12-05 10:54:44.569699] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:17.510 [2024-12-05 10:54:44.569774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60042 ] 00:11:17.510 { 00:11:17.510 "subsystems": [ 00:11:17.510 { 00:11:17.510 "subsystem": "bdev", 00:11:17.510 "config": [ 00:11:17.510 { 00:11:17.510 "params": { 00:11:17.510 "trtype": "pcie", 00:11:17.510 "traddr": "0000:00:10.0", 00:11:17.510 "name": "Nvme0" 00:11:17.510 }, 00:11:17.510 "method": "bdev_nvme_attach_controller" 00:11:17.510 }, 00:11:17.510 { 00:11:17.510 "method": "bdev_wait_for_examine" 00:11:17.510 } 00:11:17.510 ] 00:11:17.510 } 00:11:17.510 ] 00:11:17.510 } 00:11:17.769 [2024-12-05 10:54:44.726394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.769 [2024-12-05 10:54:44.772392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.769 [2024-12-05 10:54:44.814003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:17.769  [2024-12-05T10:54:45.187Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:11:18.028 00:11:18.028 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:11:18.028 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:11:18.028 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:11:18.028 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:18.028 { 00:11:18.028 "subsystems": [ 00:11:18.028 { 00:11:18.028 "subsystem": "bdev", 00:11:18.028 "config": [ 00:11:18.028 { 00:11:18.028 "params": { 00:11:18.028 "trtype": "pcie", 00:11:18.028 "traddr": "0000:00:10.0", 00:11:18.028 "name": "Nvme0" 00:11:18.028 }, 00:11:18.028 "method": "bdev_nvme_attach_controller" 00:11:18.028 }, 00:11:18.028 { 00:11:18.028 "method": "bdev_wait_for_examine" 00:11:18.028 } 00:11:18.028 ] 00:11:18.028 } 00:11:18.028 ] 00:11:18.028 } 00:11:18.028 [2024-12-05 10:54:45.137638] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
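Note: dd_rw_offset, which starts above and finishes below, checks offset handling rather than throughput: gen_bytes produces 4096 bytes of random alphanumeric data, spdk_dd writes them one block into the bdev (--seek=1), reads that block back (--skip=1 --count=1), and the harness compares the round-tripped bytes via read -rn4096 into data_check. A sketch of the round trip, with the same hypothetical ./bdev.json and a stand-in generator in place of gen_bytes:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
data=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 4096)  # stand-in for gen_bytes 4096
"$SPDK_DD" --if=<(printf '%s' "$data") --ob=Nvme0n1 --seek=1 --json ./bdev.json  # write one block at offset 1
"$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --skip=1 --count=1 --json ./bdev.json      # read it back from offset 1
read -rn4096 data_check < "$dump1"
[[ $data == "$data_check" ]] && echo "offset round trip OK"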
00:11:18.028 [2024-12-05 10:54:45.137718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:11:18.290 [2024-12-05 10:54:45.286576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.290 [2024-12-05 10:54:45.340553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.290 [2024-12-05 10:54:45.382796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:18.548  [2024-12-05T10:54:45.707Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:11:18.548 00:11:18.548 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:11:18.549 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ uusobvtbhmveo9yajam9cteaze00wkbt1ur8uqncira34r0ript1svi86oy93ldglyirbmt8ap3shwpz9c021jigg77as3hds7li3rk1k928q6re1h8gu2pcpaamddex9yvjy2fkzylsxdgimwbhwkwfd15ngsr1u5834rzplwhrkd9a7hlzvrtp4sh5lrsq8r85b2lfsdqif8lhubmlp9jeged9pqujjkv5fj9fc3rmjf7ji76pnvum0f6ekwnra3h8sv3honsnlytylx2bfyp8u0r5f9wn7bohek73o0vpb5xs0ac2cgteoj0nbgjx1mnkf4jy04e6nrxrkl3sfv9ln52c5kab9q7glfaps8e7be4fqjwl92z6j5h8m5wey4vf2raq74blwkdmv4d4vca97telypd2q6ylkzckvjwmn7c6edd8qjmdexemdz9xwaemk5o4ty5xhfl1ucjf2mj19g2ay3y6smsrzcavszzu9odviterk0j20g0n2hy52soinkvmk4epkj11qw8y7yigp8bbc52af3kqcec7mp63db8c6pvsmc3fph2c86azvxuq869likunravclds3ztggci6leio5lnk2spjvi42gigniao4864pykz3futy5rmx2rr895pcj4bcs2h5gczt0uknoz45c60tb22rhezy2rkvtpqkgycbiib7yzz4dv9fmg1awbccxrryo9wl6y0guermvfrgloo7jwovpae0tj532fvb4srjkx1tgrmp2t0to8kj1hfcgaef3lft7ixjc3nymwgn9ffznsotseztqufaebe43sint6340xavo1rr2z0cij9ktqkdkuz9gps2le4r8bu28wpbij3aj99m1xduoixidck6rx9bsnllnnfwzjbcs54e8i0ijrcb0dnoznfycs8tb43uzz179ndj1b1mr70mrr1v5tkj8vzmoazy2a8d92g17oh04j3lolw8hk99t3j9zwm7jj4mr5ohv6h8qiim2qfqry64onojx62xgcfxsxzx012y6kj29h3o9uybyt84fkuot3y7h0r7no9sdmpthdjqa2e3j71vcgnw4p9c49uy7ogumbcs5me4hmvxp5ng7ha6ibwzribtm5p1fl1zrjrl8wzo8xsc5tpwg0ti874gl79ifqdv4q7wo0rmuenrixecn6sv4ktma3nc38m9rkgeqfdukx7ls1o1rqdhxl2qck20ipw6bkmaryjtvir5uht84lqaobmlgf2u3bunq149tzmsobmgj1j4dj7nrnivvvokpy0mtbys8z6rgoh437yvvclkfft1ywdpihlau3qu9q51sfthbc2sc0zj4tfnp5fu1gmbvyxelu7cb79dvq4ippekt2wggqyqnewacrqjnqs5gbpq986iqtf4slsof1hmkpvaavq9mjfcz5prilytruibks68uabh74dai1m1n48kxdd1gfr2655qva74vjn8hrzjs3sty1xdfj06znoz1d65yrg6qkxl6b8f5ecpvqtwgnmz40dn8cxxfs1jt8qunqqcho23xen8utcvr0tbz4k78wh9cl7rvq9ewfws9p3v5t9erepv1bunbtbs9tz304cvzj06bp5vsxbt3duubyvlm68jzx2638lu5mznmjos9kalznhzv6tiibgui2421fb6rfh4yux07kjzwzf01azhosnjazhas41h7sh1taqkyldl19zv5a3vlfxyp1ovve31q5zi18d3dh95nxwii3wwqvumod1bm7vo0p60qszlvj7v5s5yf01zmdk11p5jh6s7bf5sbwqbkgiar34o2opk7bw2e6nrovr8dt2f32gqk1ov6y7budmiope7xbw1w3pvnmua909t6ojjp1v87n8r4er2v5x2vta95eq3xdlesjdxpbt0vpm73uo6n4tr6s7kfj3h4o7svvi3u1b53o9xkvc6n1zdzwjs9snxxw1yq8ej4hrlgkpwh2ftndwmx7mywkbwfrl5he5l8frsiyx1b4fwls0oc6zxslcvis7qareiz29gjc9comrk9e10h96u6p3zykvdsv2gpdapsuyf3hzh1d39t4gnmtb40yb24fbnbr1nx1qaoeptsougwaofpk2it85t1pt99dte38mpst4nnxnpew0ntnsbt26xm8whlr6hle52uvkzdd2ldkkrp9jtmgh7ia1hczsbiongqwl20qnok30z4k1tw9x0twfeg5jtosagciscae46hzt3bgaikeywkvpj6ox0uq9ca44h7zvdgwz3ut0qhpnrf7vsw0v27l5ofrpyt24vocfwzyspnaumbeeqt9zx3dglskaj36cxg1mcidjoiswnr7rq6hdr1wrtvq2avn5c5crd23rou9f4cjuajr7iw99auh0r51yv7xzvsahqxewadshhj1m89i070xiuvm07ardjqnxy31a4y575525xf8ulln3kx42nyuch3ae20g2yflyi1c8826vp49cdn1emhivm4sde6oufs9550by634kebvwg2t94spf2ty9lw1b8m2bm6fhhvi12efc91z03pmyalpmhdlnza16qk93fpm8bf6bp37cgq
vhhyc5fabioazqrohapbesqhcsjkuz90x5wyvqw7g4fufdccew2gdw2mk2plkxdadlwp1lobwh9d3k14060x25byxk48m4nqx5j6pxlmkm5y3w8w4ps2xa40hsrp4pzpptitcqz8nzxile84kij4i8q8zhw789wyjhftvl7dj7hhgqd7jsrgs1duushizdrak32do740u76rf1h2c27v5i9gswoovigfwxyddar56hrxuiremv52dczj860wtpd5jdmlzmdazmygu8cu4ldstnjiad8z71byypxht3drptxqhdcfyly2j0kpn5y6konglb0e6k9trp9p9us4s6ol0zx8dcl5rkd1q5218jn4b70ej8bzf540fa0026nvs0cyb0sdmeanjgp7v001jwcu449u9swdj2q4bc83x2bo4v0kna3fxtoz7bhlctv0d3lj268cns9gx9c8ug9y3ogtcfhv43gc1x8qj6pss4zjv37u1ufwz92wvzhwl2xe6j2b4jy7ol2atyyx1kzg5gnis33p07c5d19e7aa9wwqy8v50s691jsx9qnp4h9k87t28n1eg83oyiar5a8rk0watghh7cvpo98t7zmie7fq3zpnsl7u5y6jzjy40eutb19zdy2z9g0h4laaf7pdmovcjx8ee3apsejqikqb4up0sglfvjk3xa6fg0k6whybjhijhnz6sgfbalkubh34kmt9pi9whvgbm24dwhteqj0h1xx5x6wcrz88hxnfbtnhok603mlxc19u4tuuotlfleqk61kcl2ccm3uy7wsdkylpqurq1nkociwnbca1ny3ok9byb696ai719leq38mkvroa2duis3mizniov1xtlba0aq8v7jas68cg7tyel7i2ufrrbvj8ektcnuk9rc09b50hvmuhl8rup0tcrvmvn4dcsdo9u60allsndf11ihkxj5paer1hrxb6uwevdy438hdnprvtqw9rrhda6r7z59ummlg7wsarghl5a9sxikdlsplt0w4q5in7cbgqfehtofmct2jrjp0u4sca8gvbn5hbgltn1f8y3qisjow7nimqpq17o85zah39234oc5klorlcwy605e0f6thbxtqfc2wfxffeo0sn8k0deo1a6crdddwf218si1v3hfmsyhql8d4w0nqfc6d0k63j9vc86hw5b9d9g2qhweidntj7k7yko68s0sc5fuh36uv866lmysw1gguxc4nb2n93ztcoznvqn8qfgsksfuy6k8ndr850f22gij5zxjijdq8hmtalcifl6qol9353ss4qh5j57n56pcad9749qgrl9nvxj1ee4320wbjnm9xiizyga7iutvmkdr7biwxqiy5do91ygzvxd8upyfkq8ppgkfhnbb39rdisd0f5sds3mpr5pdkyoe8a60s7fg8n27br8ihagckiq4glfiq1mj1oio38hjq6t6ui35o3g2rj8lsfa5qot == \u\u\s\o\b\v\t\b\h\m\v\e\o\9\y\a\j\a\m\9\c\t\e\a\z\e\0\0\w\k\b\t\1\u\r\8\u\q\n\c\i\r\a\3\4\r\0\r\i\p\t\1\s\v\i\8\6\o\y\9\3\l\d\g\l\y\i\r\b\m\t\8\a\p\3\s\h\w\p\z\9\c\0\2\1\j\i\g\g\7\7\a\s\3\h\d\s\7\l\i\3\r\k\1\k\9\2\8\q\6\r\e\1\h\8\g\u\2\p\c\p\a\a\m\d\d\e\x\9\y\v\j\y\2\f\k\z\y\l\s\x\d\g\i\m\w\b\h\w\k\w\f\d\1\5\n\g\s\r\1\u\5\8\3\4\r\z\p\l\w\h\r\k\d\9\a\7\h\l\z\v\r\t\p\4\s\h\5\l\r\s\q\8\r\8\5\b\2\l\f\s\d\q\i\f\8\l\h\u\b\m\l\p\9\j\e\g\e\d\9\p\q\u\j\j\k\v\5\f\j\9\f\c\3\r\m\j\f\7\j\i\7\6\p\n\v\u\m\0\f\6\e\k\w\n\r\a\3\h\8\s\v\3\h\o\n\s\n\l\y\t\y\l\x\2\b\f\y\p\8\u\0\r\5\f\9\w\n\7\b\o\h\e\k\7\3\o\0\v\p\b\5\x\s\0\a\c\2\c\g\t\e\o\j\0\n\b\g\j\x\1\m\n\k\f\4\j\y\0\4\e\6\n\r\x\r\k\l\3\s\f\v\9\l\n\5\2\c\5\k\a\b\9\q\7\g\l\f\a\p\s\8\e\7\b\e\4\f\q\j\w\l\9\2\z\6\j\5\h\8\m\5\w\e\y\4\v\f\2\r\a\q\7\4\b\l\w\k\d\m\v\4\d\4\v\c\a\9\7\t\e\l\y\p\d\2\q\6\y\l\k\z\c\k\v\j\w\m\n\7\c\6\e\d\d\8\q\j\m\d\e\x\e\m\d\z\9\x\w\a\e\m\k\5\o\4\t\y\5\x\h\f\l\1\u\c\j\f\2\m\j\1\9\g\2\a\y\3\y\6\s\m\s\r\z\c\a\v\s\z\z\u\9\o\d\v\i\t\e\r\k\0\j\2\0\g\0\n\2\h\y\5\2\s\o\i\n\k\v\m\k\4\e\p\k\j\1\1\q\w\8\y\7\y\i\g\p\8\b\b\c\5\2\a\f\3\k\q\c\e\c\7\m\p\6\3\d\b\8\c\6\p\v\s\m\c\3\f\p\h\2\c\8\6\a\z\v\x\u\q\8\6\9\l\i\k\u\n\r\a\v\c\l\d\s\3\z\t\g\g\c\i\6\l\e\i\o\5\l\n\k\2\s\p\j\v\i\4\2\g\i\g\n\i\a\o\4\8\6\4\p\y\k\z\3\f\u\t\y\5\r\m\x\2\r\r\8\9\5\p\c\j\4\b\c\s\2\h\5\g\c\z\t\0\u\k\n\o\z\4\5\c\6\0\t\b\2\2\r\h\e\z\y\2\r\k\v\t\p\q\k\g\y\c\b\i\i\b\7\y\z\z\4\d\v\9\f\m\g\1\a\w\b\c\c\x\r\r\y\o\9\w\l\6\y\0\g\u\e\r\m\v\f\r\g\l\o\o\7\j\w\o\v\p\a\e\0\t\j\5\3\2\f\v\b\4\s\r\j\k\x\1\t\g\r\m\p\2\t\0\t\o\8\k\j\1\h\f\c\g\a\e\f\3\l\f\t\7\i\x\j\c\3\n\y\m\w\g\n\9\f\f\z\n\s\o\t\s\e\z\t\q\u\f\a\e\b\e\4\3\s\i\n\t\6\3\4\0\x\a\v\o\1\r\r\2\z\0\c\i\j\9\k\t\q\k\d\k\u\z\9\g\p\s\2\l\e\4\r\8\b\u\2\8\w\p\b\i\j\3\a\j\9\9\m\1\x\d\u\o\i\x\i\d\c\k\6\r\x\9\b\s\n\l\l\n\n\f\w\z\j\b\c\s\5\4\e\8\i\0\i\j\r\c\b\0\d\n\o\z\n\f\y\c\s\8\t\b\4\3\u\z\z\1\7\9\n\d\j\1\b\1\m\r\7\0\m\r\r\1\v\5\t\k\j\8\v\z\m\o\a\z\y\2\a\8\d\9\2\g\1\7\o\h\0\4\j\3\l\o\l\w\8\h\k\9\9\t\3\j\9\z\w\m\7\j\j\4\m\r\5\o\h\v\6\h\8\q\i\i\m\2\q\f\q\r\y\6\4\o\n\o\j\x\6\2\x\g\c\f\x\
s\x\z\x\0\1\2\y\6\k\j\2\9\h\3\o\9\u\y\b\y\t\8\4\f\k\u\o\t\3\y\7\h\0\r\7\n\o\9\s\d\m\p\t\h\d\j\q\a\2\e\3\j\7\1\v\c\g\n\w\4\p\9\c\4\9\u\y\7\o\g\u\m\b\c\s\5\m\e\4\h\m\v\x\p\5\n\g\7\h\a\6\i\b\w\z\r\i\b\t\m\5\p\1\f\l\1\z\r\j\r\l\8\w\z\o\8\x\s\c\5\t\p\w\g\0\t\i\8\7\4\g\l\7\9\i\f\q\d\v\4\q\7\w\o\0\r\m\u\e\n\r\i\x\e\c\n\6\s\v\4\k\t\m\a\3\n\c\3\8\m\9\r\k\g\e\q\f\d\u\k\x\7\l\s\1\o\1\r\q\d\h\x\l\2\q\c\k\2\0\i\p\w\6\b\k\m\a\r\y\j\t\v\i\r\5\u\h\t\8\4\l\q\a\o\b\m\l\g\f\2\u\3\b\u\n\q\1\4\9\t\z\m\s\o\b\m\g\j\1\j\4\d\j\7\n\r\n\i\v\v\v\o\k\p\y\0\m\t\b\y\s\8\z\6\r\g\o\h\4\3\7\y\v\v\c\l\k\f\f\t\1\y\w\d\p\i\h\l\a\u\3\q\u\9\q\5\1\s\f\t\h\b\c\2\s\c\0\z\j\4\t\f\n\p\5\f\u\1\g\m\b\v\y\x\e\l\u\7\c\b\7\9\d\v\q\4\i\p\p\e\k\t\2\w\g\g\q\y\q\n\e\w\a\c\r\q\j\n\q\s\5\g\b\p\q\9\8\6\i\q\t\f\4\s\l\s\o\f\1\h\m\k\p\v\a\a\v\q\9\m\j\f\c\z\5\p\r\i\l\y\t\r\u\i\b\k\s\6\8\u\a\b\h\7\4\d\a\i\1\m\1\n\4\8\k\x\d\d\1\g\f\r\2\6\5\5\q\v\a\7\4\v\j\n\8\h\r\z\j\s\3\s\t\y\1\x\d\f\j\0\6\z\n\o\z\1\d\6\5\y\r\g\6\q\k\x\l\6\b\8\f\5\e\c\p\v\q\t\w\g\n\m\z\4\0\d\n\8\c\x\x\f\s\1\j\t\8\q\u\n\q\q\c\h\o\2\3\x\e\n\8\u\t\c\v\r\0\t\b\z\4\k\7\8\w\h\9\c\l\7\r\v\q\9\e\w\f\w\s\9\p\3\v\5\t\9\e\r\e\p\v\1\b\u\n\b\t\b\s\9\t\z\3\0\4\c\v\z\j\0\6\b\p\5\v\s\x\b\t\3\d\u\u\b\y\v\l\m\6\8\j\z\x\2\6\3\8\l\u\5\m\z\n\m\j\o\s\9\k\a\l\z\n\h\z\v\6\t\i\i\b\g\u\i\2\4\2\1\f\b\6\r\f\h\4\y\u\x\0\7\k\j\z\w\z\f\0\1\a\z\h\o\s\n\j\a\z\h\a\s\4\1\h\7\s\h\1\t\a\q\k\y\l\d\l\1\9\z\v\5\a\3\v\l\f\x\y\p\1\o\v\v\e\3\1\q\5\z\i\1\8\d\3\d\h\9\5\n\x\w\i\i\3\w\w\q\v\u\m\o\d\1\b\m\7\v\o\0\p\6\0\q\s\z\l\v\j\7\v\5\s\5\y\f\0\1\z\m\d\k\1\1\p\5\j\h\6\s\7\b\f\5\s\b\w\q\b\k\g\i\a\r\3\4\o\2\o\p\k\7\b\w\2\e\6\n\r\o\v\r\8\d\t\2\f\3\2\g\q\k\1\o\v\6\y\7\b\u\d\m\i\o\p\e\7\x\b\w\1\w\3\p\v\n\m\u\a\9\0\9\t\6\o\j\j\p\1\v\8\7\n\8\r\4\e\r\2\v\5\x\2\v\t\a\9\5\e\q\3\x\d\l\e\s\j\d\x\p\b\t\0\v\p\m\7\3\u\o\6\n\4\t\r\6\s\7\k\f\j\3\h\4\o\7\s\v\v\i\3\u\1\b\5\3\o\9\x\k\v\c\6\n\1\z\d\z\w\j\s\9\s\n\x\x\w\1\y\q\8\e\j\4\h\r\l\g\k\p\w\h\2\f\t\n\d\w\m\x\7\m\y\w\k\b\w\f\r\l\5\h\e\5\l\8\f\r\s\i\y\x\1\b\4\f\w\l\s\0\o\c\6\z\x\s\l\c\v\i\s\7\q\a\r\e\i\z\2\9\g\j\c\9\c\o\m\r\k\9\e\1\0\h\9\6\u\6\p\3\z\y\k\v\d\s\v\2\g\p\d\a\p\s\u\y\f\3\h\z\h\1\d\3\9\t\4\g\n\m\t\b\4\0\y\b\2\4\f\b\n\b\r\1\n\x\1\q\a\o\e\p\t\s\o\u\g\w\a\o\f\p\k\2\i\t\8\5\t\1\p\t\9\9\d\t\e\3\8\m\p\s\t\4\n\n\x\n\p\e\w\0\n\t\n\s\b\t\2\6\x\m\8\w\h\l\r\6\h\l\e\5\2\u\v\k\z\d\d\2\l\d\k\k\r\p\9\j\t\m\g\h\7\i\a\1\h\c\z\s\b\i\o\n\g\q\w\l\2\0\q\n\o\k\3\0\z\4\k\1\t\w\9\x\0\t\w\f\e\g\5\j\t\o\s\a\g\c\i\s\c\a\e\4\6\h\z\t\3\b\g\a\i\k\e\y\w\k\v\p\j\6\o\x\0\u\q\9\c\a\4\4\h\7\z\v\d\g\w\z\3\u\t\0\q\h\p\n\r\f\7\v\s\w\0\v\2\7\l\5\o\f\r\p\y\t\2\4\v\o\c\f\w\z\y\s\p\n\a\u\m\b\e\e\q\t\9\z\x\3\d\g\l\s\k\a\j\3\6\c\x\g\1\m\c\i\d\j\o\i\s\w\n\r\7\r\q\6\h\d\r\1\w\r\t\v\q\2\a\v\n\5\c\5\c\r\d\2\3\r\o\u\9\f\4\c\j\u\a\j\r\7\i\w\9\9\a\u\h\0\r\5\1\y\v\7\x\z\v\s\a\h\q\x\e\w\a\d\s\h\h\j\1\m\8\9\i\0\7\0\x\i\u\v\m\0\7\a\r\d\j\q\n\x\y\3\1\a\4\y\5\7\5\5\2\5\x\f\8\u\l\l\n\3\k\x\4\2\n\y\u\c\h\3\a\e\2\0\g\2\y\f\l\y\i\1\c\8\8\2\6\v\p\4\9\c\d\n\1\e\m\h\i\v\m\4\s\d\e\6\o\u\f\s\9\5\5\0\b\y\6\3\4\k\e\b\v\w\g\2\t\9\4\s\p\f\2\t\y\9\l\w\1\b\8\m\2\b\m\6\f\h\h\v\i\1\2\e\f\c\9\1\z\0\3\p\m\y\a\l\p\m\h\d\l\n\z\a\1\6\q\k\9\3\f\p\m\8\b\f\6\b\p\3\7\c\g\q\v\h\h\y\c\5\f\a\b\i\o\a\z\q\r\o\h\a\p\b\e\s\q\h\c\s\j\k\u\z\9\0\x\5\w\y\v\q\w\7\g\4\f\u\f\d\c\c\e\w\2\g\d\w\2\m\k\2\p\l\k\x\d\a\d\l\w\p\1\l\o\b\w\h\9\d\3\k\1\4\0\6\0\x\2\5\b\y\x\k\4\8\m\4\n\q\x\5\j\6\p\x\l\m\k\m\5\y\3\w\8\w\4\p\s\2\x\a\4\0\h\s\r\p\4\p\z\p\p\t\i\t\c\q\z\8\n\z\x\i\l\e\8\4\k\i\j\4\i\8\q\8\z\h\w\7\8\9\w\y\j\h\f\t\v\l\7\d\j\7\h\h\g\q\d\7\j\s\r\g\s\1\d\u\u\s\h\i\z\d\r\a\k\3\2\d\o\7\4\0\u
\7\6\r\f\1\h\2\c\2\7\v\5\i\9\g\s\w\o\o\v\i\g\f\w\x\y\d\d\a\r\5\6\h\r\x\u\i\r\e\m\v\5\2\d\c\z\j\8\6\0\w\t\p\d\5\j\d\m\l\z\m\d\a\z\m\y\g\u\8\c\u\4\l\d\s\t\n\j\i\a\d\8\z\7\1\b\y\y\p\x\h\t\3\d\r\p\t\x\q\h\d\c\f\y\l\y\2\j\0\k\p\n\5\y\6\k\o\n\g\l\b\0\e\6\k\9\t\r\p\9\p\9\u\s\4\s\6\o\l\0\z\x\8\d\c\l\5\r\k\d\1\q\5\2\1\8\j\n\4\b\7\0\e\j\8\b\z\f\5\4\0\f\a\0\0\2\6\n\v\s\0\c\y\b\0\s\d\m\e\a\n\j\g\p\7\v\0\0\1\j\w\c\u\4\4\9\u\9\s\w\d\j\2\q\4\b\c\8\3\x\2\b\o\4\v\0\k\n\a\3\f\x\t\o\z\7\b\h\l\c\t\v\0\d\3\l\j\2\6\8\c\n\s\9\g\x\9\c\8\u\g\9\y\3\o\g\t\c\f\h\v\4\3\g\c\1\x\8\q\j\6\p\s\s\4\z\j\v\3\7\u\1\u\f\w\z\9\2\w\v\z\h\w\l\2\x\e\6\j\2\b\4\j\y\7\o\l\2\a\t\y\y\x\1\k\z\g\5\g\n\i\s\3\3\p\0\7\c\5\d\1\9\e\7\a\a\9\w\w\q\y\8\v\5\0\s\6\9\1\j\s\x\9\q\n\p\4\h\9\k\8\7\t\2\8\n\1\e\g\8\3\o\y\i\a\r\5\a\8\r\k\0\w\a\t\g\h\h\7\c\v\p\o\9\8\t\7\z\m\i\e\7\f\q\3\z\p\n\s\l\7\u\5\y\6\j\z\j\y\4\0\e\u\t\b\1\9\z\d\y\2\z\9\g\0\h\4\l\a\a\f\7\p\d\m\o\v\c\j\x\8\e\e\3\a\p\s\e\j\q\i\k\q\b\4\u\p\0\s\g\l\f\v\j\k\3\x\a\6\f\g\0\k\6\w\h\y\b\j\h\i\j\h\n\z\6\s\g\f\b\a\l\k\u\b\h\3\4\k\m\t\9\p\i\9\w\h\v\g\b\m\2\4\d\w\h\t\e\q\j\0\h\1\x\x\5\x\6\w\c\r\z\8\8\h\x\n\f\b\t\n\h\o\k\6\0\3\m\l\x\c\1\9\u\4\t\u\u\o\t\l\f\l\e\q\k\6\1\k\c\l\2\c\c\m\3\u\y\7\w\s\d\k\y\l\p\q\u\r\q\1\n\k\o\c\i\w\n\b\c\a\1\n\y\3\o\k\9\b\y\b\6\9\6\a\i\7\1\9\l\e\q\3\8\m\k\v\r\o\a\2\d\u\i\s\3\m\i\z\n\i\o\v\1\x\t\l\b\a\0\a\q\8\v\7\j\a\s\6\8\c\g\7\t\y\e\l\7\i\2\u\f\r\r\b\v\j\8\e\k\t\c\n\u\k\9\r\c\0\9\b\5\0\h\v\m\u\h\l\8\r\u\p\0\t\c\r\v\m\v\n\4\d\c\s\d\o\9\u\6\0\a\l\l\s\n\d\f\1\1\i\h\k\x\j\5\p\a\e\r\1\h\r\x\b\6\u\w\e\v\d\y\4\3\8\h\d\n\p\r\v\t\q\w\9\r\r\h\d\a\6\r\7\z\5\9\u\m\m\l\g\7\w\s\a\r\g\h\l\5\a\9\s\x\i\k\d\l\s\p\l\t\0\w\4\q\5\i\n\7\c\b\g\q\f\e\h\t\o\f\m\c\t\2\j\r\j\p\0\u\4\s\c\a\8\g\v\b\n\5\h\b\g\l\t\n\1\f\8\y\3\q\i\s\j\o\w\7\n\i\m\q\p\q\1\7\o\8\5\z\a\h\3\9\2\3\4\o\c\5\k\l\o\r\l\c\w\y\6\0\5\e\0\f\6\t\h\b\x\t\q\f\c\2\w\f\x\f\f\e\o\0\s\n\8\k\0\d\e\o\1\a\6\c\r\d\d\d\w\f\2\1\8\s\i\1\v\3\h\f\m\s\y\h\q\l\8\d\4\w\0\n\q\f\c\6\d\0\k\6\3\j\9\v\c\8\6\h\w\5\b\9\d\9\g\2\q\h\w\e\i\d\n\t\j\7\k\7\y\k\o\6\8\s\0\s\c\5\f\u\h\3\6\u\v\8\6\6\l\m\y\s\w\1\g\g\u\x\c\4\n\b\2\n\9\3\z\t\c\o\z\n\v\q\n\8\q\f\g\s\k\s\f\u\y\6\k\8\n\d\r\8\5\0\f\2\2\g\i\j\5\z\x\j\i\j\d\q\8\h\m\t\a\l\c\i\f\l\6\q\o\l\9\3\5\3\s\s\4\q\h\5\j\5\7\n\5\6\p\c\a\d\9\7\4\9\q\g\r\l\9\n\v\x\j\1\e\e\4\3\2\0\w\b\j\n\m\9\x\i\i\z\y\g\a\7\i\u\t\v\m\k\d\r\7\b\i\w\x\q\i\y\5\d\o\9\1\y\g\z\v\x\d\8\u\p\y\f\k\q\8\p\p\g\k\f\h\n\b\b\3\9\r\d\i\s\d\0\f\5\s\d\s\3\m\p\r\5\p\d\k\y\o\e\8\a\6\0\s\7\f\g\8\n\2\7\b\r\8\i\h\a\g\c\k\i\q\4\g\l\f\i\q\1\m\j\1\o\i\o\3\8\h\j\q\6\t\6\u\i\3\5\o\3\g\2\r\j\8\l\s\f\a\5\q\o\t ]] 00:11:18.549 00:11:18.549 real 0m1.197s 00:11:18.549 user 0m0.800s 00:11:18.549 sys 0m0.547s 00:11:18.549 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.549 ************************************ 00:11:18.549 END TEST dd_rw_offset 00:11:18.549 ************************************ 00:11:18.549 10:54:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:11:18.808 10:54:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:18.808 [2024-12-05 10:54:45.780845] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:18.808 [2024-12-05 10:54:45.780923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:11:18.808 { 00:11:18.808 "subsystems": [ 00:11:18.808 { 00:11:18.808 "subsystem": "bdev", 00:11:18.808 "config": [ 00:11:18.808 { 00:11:18.808 "params": { 00:11:18.808 "trtype": "pcie", 00:11:18.808 "traddr": "0000:00:10.0", 00:11:18.808 "name": "Nvme0" 00:11:18.808 }, 00:11:18.808 "method": "bdev_nvme_attach_controller" 00:11:18.808 }, 00:11:18.808 { 00:11:18.808 "method": "bdev_wait_for_examine" 00:11:18.808 } 00:11:18.808 ] 00:11:18.808 } 00:11:18.808 ] 00:11:18.808 } 00:11:18.808 [2024-12-05 10:54:45.932150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.067 [2024-12-05 10:54:45.986682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.067 [2024-12-05 10:54:46.029597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.067  [2024-12-05T10:54:46.485Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:11:19.326 00:11:19.326 10:54:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:19.326 ************************************ 00:11:19.326 END TEST spdk_dd_basic_rw 00:11:19.326 ************************************ 00:11:19.326 00:11:19.326 real 0m16.143s 00:11:19.326 user 0m11.072s 00:11:19.326 sys 0m6.262s 00:11:19.326 10:54:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.326 10:54:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:11:19.327 10:54:46 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:19.327 10:54:46 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.327 10:54:46 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.327 10:54:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:19.327 ************************************ 00:11:19.327 START TEST spdk_dd_posix 00:11:19.327 ************************************ 00:11:19.327 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:11:19.586 * Looking for test storage... 
00:11:19.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:11:19.586 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.587 --rc genhtml_branch_coverage=1 00:11:19.587 --rc genhtml_function_coverage=1 00:11:19.587 --rc genhtml_legend=1 00:11:19.587 --rc geninfo_all_blocks=1 00:11:19.587 --rc geninfo_unexecuted_blocks=1 00:11:19.587 00:11:19.587 ' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.587 --rc genhtml_branch_coverage=1 00:11:19.587 --rc genhtml_function_coverage=1 00:11:19.587 --rc genhtml_legend=1 00:11:19.587 --rc geninfo_all_blocks=1 00:11:19.587 --rc geninfo_unexecuted_blocks=1 00:11:19.587 00:11:19.587 ' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.587 --rc genhtml_branch_coverage=1 00:11:19.587 --rc genhtml_function_coverage=1 00:11:19.587 --rc genhtml_legend=1 00:11:19.587 --rc geninfo_all_blocks=1 00:11:19.587 --rc geninfo_unexecuted_blocks=1 00:11:19.587 00:11:19.587 ' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.587 --rc genhtml_branch_coverage=1 00:11:19.587 --rc genhtml_function_coverage=1 00:11:19.587 --rc genhtml_legend=1 00:11:19.587 --rc geninfo_all_blocks=1 00:11:19.587 --rc geninfo_unexecuted_blocks=1 00:11:19.587 00:11:19.587 ' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:11:19.587 * First test run, liburing in use 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:19.587 ************************************ 00:11:19.587 START TEST dd_flag_append 00:11:19.587 ************************************ 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=066g832qvfqfwgv94jxj1nkgvljz6dfv 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ob3v80qh19762gbexk2ataeen7hp5yz4 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 066g832qvfqfwgv94jxj1nkgvljz6dfv 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ob3v80qh19762gbexk2ataeen7hp5yz4 00:11:19.587 10:54:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:19.587 [2024-12-05 10:54:46.703710] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
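
The append case traced here seeds dd.dump0 and dd.dump1 with 32 random bytes each, copies dump0 into dump1 with O_APPEND, and the [[ ... == ... ]] comparison just below passes only if dump1 ends up as its own original bytes followed by dump0's. A minimal sketch of the same semantics with coreutils dd (file names are illustrative, not the test's paths; GNU dd needs conv=notrunc next to oflag=append so the destination is not truncated first):

    printf %s 'AAAA' > dump0
    printf %s 'BBBB' > dump1
    # oflag=append opens dump1 with O_APPEND; conv=notrunc keeps its existing bytes
    dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
    [[ $(cat dump1) == BBBBAAAA ]] && echo append-ok
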
00:11:19.587 [2024-12-05 10:54:46.703980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:11:19.846 [2024-12-05 10:54:46.852778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.846 [2024-12-05 10:54:46.908453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.846 [2024-12-05 10:54:46.950545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:19.846  [2024-12-05T10:54:47.264Z] Copying: 32/32 [B] (average 31 kBps) 00:11:20.105 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ob3v80qh19762gbexk2ataeen7hp5yz4066g832qvfqfwgv94jxj1nkgvljz6dfv == \o\b\3\v\8\0\q\h\1\9\7\6\2\g\b\e\x\k\2\a\t\a\e\e\n\7\h\p\5\y\z\4\0\6\6\g\8\3\2\q\v\f\q\f\w\g\v\9\4\j\x\j\1\n\k\g\v\l\j\z\6\d\f\v ]] 00:11:20.105 00:11:20.105 real 0m0.508s 00:11:20.105 user 0m0.257s 00:11:20.105 sys 0m0.251s 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:11:20.105 ************************************ 00:11:20.105 END TEST dd_flag_append 00:11:20.105 ************************************ 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:20.105 ************************************ 00:11:20.105 START TEST dd_flag_directory 00:11:20.105 ************************************ 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:20.105 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:20.364 [2024-12-05 10:54:47.280865] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:20.364 [2024-12-05 10:54:47.280962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60186 ] 00:11:20.364 [2024-12-05 10:54:47.431969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.364 [2024-12-05 10:54:47.486823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.639 [2024-12-05 10:54:47.528850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:20.639 [2024-12-05 10:54:47.561645] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:20.639 [2024-12-05 10:54:47.561697] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:20.639 [2024-12-05 10:54:47.561716] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:20.639 [2024-12-05 10:54:47.660360] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.639 10:54:47 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:20.639 10:54:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:20.639 [2024-12-05 10:54:47.783178] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:20.639 [2024-12-05 10:54:47.783443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:11:20.899 [2024-12-05 10:54:47.938953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.899 [2024-12-05 10:54:47.992852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.899 [2024-12-05 10:54:48.034539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.159 [2024-12-05 10:54:48.066396] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:21.159 [2024-12-05 10:54:48.066659] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:21.159 [2024-12-05 10:54:48.066689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:21.159 [2024-12-05 10:54:48.165032] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:11:21.159 ************************************ 00:11:21.159 END TEST dd_flag_directory 00:11:21.159 ************************************ 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:21.159 00:11:21.159 real 0m1.009s 00:11:21.159 user 0m0.529s 00:11:21.159 sys 0m0.267s 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:11:21.159 10:54:48 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:21.159 ************************************ 00:11:21.159 START TEST dd_flag_nofollow 00:11:21.159 ************************************ 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:21.159 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:21.418 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:21.418 [2024-12-05 10:54:48.377626] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:21.418 [2024-12-05 10:54:48.377708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60225 ] 00:11:21.418 [2024-12-05 10:54:48.529667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.677 [2024-12-05 10:54:48.584216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.677 [2024-12-05 10:54:48.627828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:21.677 [2024-12-05 10:54:48.659498] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:21.677 [2024-12-05 10:54:48.659543] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:21.677 [2024-12-05 10:54:48.659560] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:21.677 [2024-12-05 10:54:48.757362] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:11:21.677 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:21.678 10:54:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:21.678 10:54:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:21.936 [2024-12-05 10:54:48.879511] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:21.936 [2024-12-05 10:54:48.879709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60234 ] 00:11:21.936 [2024-12-05 10:54:49.028478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.936 [2024-12-05 10:54:49.074588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.195 [2024-12-05 10:54:49.116609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.195 [2024-12-05 10:54:49.147059] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:22.195 [2024-12-05 10:54:49.147099] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:22.195 [2024-12-05 10:54:49.147117] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:22.195 [2024-12-05 10:54:49.243765] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:22.195 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.454 [2024-12-05 10:54:49.379555] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
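
This nofollow test and the dd_flag_directory test above share one pattern: the NOT wrapper expects the copy to fail when the extra open flag conflicts with the file type (ELOOP from O_NOFOLLOW on a symlink, ENOTDIR from O_DIRECTORY on a regular file), and a plain copy through the same path must then succeed. GNU dd spells both flags the same way, so the behaviour can be reproduced outside spdk_dd; a hedged sketch with illustrative paths:

    printf %s 'payload' > dump0
    ln -fs dump0 dump0.link
    # O_NOFOLLOW on a symlink: "Too many levels of symbolic links" (ELOOP)
    dd if=dump0.link of=dump1 iflag=nofollow 2>/dev/null || echo eloop-as-expected
    # without nofollow the link is dereferenced and the copy goes through
    dd if=dump0.link of=dump1 status=none && echo copy-ok
    # O_DIRECTORY on a regular file: "Not a directory" (ENOTDIR)
    dd if=dump0 of=/dev/null iflag=directory 2>/dev/null || echo enotdir-as-expected
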
00:11:22.455 [2024-12-05 10:54:49.379963] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:11:22.455 [2024-12-05 10:54:49.533031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.455 [2024-12-05 10:54:49.588698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.713 [2024-12-05 10:54:49.631088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:22.714  [2024-12-05T10:54:49.873Z] Copying: 512/512 [B] (average 500 kBps) 00:11:22.714 00:11:22.714 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 3uo3j8zp170jtc0gvw6t8wea0yukx0o3qz04oxy3r2dwybyp3wqozs7fdfs5m3hz14fz7p6lb1cxpsn0gyn3pqywlyaxsl82qojpybcqincyvgaba7shwgumpespew38dct40fy9v1s2xhjtm18mjykkosq5624dc520qzk6jmkx7zg30q1pf1vovvbabsdjvikgwrezfxxal0oup402fq7sstahpso942nyfwuntcxdnyyxduwl57tm1kd5d6c1nudz20mwfxlb28q8aedmh7qsjajsikl5dabgz6gmyahrcxqqbci988y41nbpse2ehh01jtfblt8mfwh11cewu73twfkyfm5h00v8jszp8m3bdike75vg9arml8aagedrujgoiyi2y4eu0mfg1wai1rt6uf4zl4y0nhvre6vn4pthaaosnmvxtifsqb5sgglgqmxxfo6z12wfqsogj10u70tnm1r8f3obsrwdilz0chusso2bzaxust78l4lr7ljq == \3\u\o\3\j\8\z\p\1\7\0\j\t\c\0\g\v\w\6\t\8\w\e\a\0\y\u\k\x\0\o\3\q\z\0\4\o\x\y\3\r\2\d\w\y\b\y\p\3\w\q\o\z\s\7\f\d\f\s\5\m\3\h\z\1\4\f\z\7\p\6\l\b\1\c\x\p\s\n\0\g\y\n\3\p\q\y\w\l\y\a\x\s\l\8\2\q\o\j\p\y\b\c\q\i\n\c\y\v\g\a\b\a\7\s\h\w\g\u\m\p\e\s\p\e\w\3\8\d\c\t\4\0\f\y\9\v\1\s\2\x\h\j\t\m\1\8\m\j\y\k\k\o\s\q\5\6\2\4\d\c\5\2\0\q\z\k\6\j\m\k\x\7\z\g\3\0\q\1\p\f\1\v\o\v\v\b\a\b\s\d\j\v\i\k\g\w\r\e\z\f\x\x\a\l\0\o\u\p\4\0\2\f\q\7\s\s\t\a\h\p\s\o\9\4\2\n\y\f\w\u\n\t\c\x\d\n\y\y\x\d\u\w\l\5\7\t\m\1\k\d\5\d\6\c\1\n\u\d\z\2\0\m\w\f\x\l\b\2\8\q\8\a\e\d\m\h\7\q\s\j\a\j\s\i\k\l\5\d\a\b\g\z\6\g\m\y\a\h\r\c\x\q\q\b\c\i\9\8\8\y\4\1\n\b\p\s\e\2\e\h\h\0\1\j\t\f\b\l\t\8\m\f\w\h\1\1\c\e\w\u\7\3\t\w\f\k\y\f\m\5\h\0\0\v\8\j\s\z\p\8\m\3\b\d\i\k\e\7\5\v\g\9\a\r\m\l\8\a\a\g\e\d\r\u\j\g\o\i\y\i\2\y\4\e\u\0\m\f\g\1\w\a\i\1\r\t\6\u\f\4\z\l\4\y\0\n\h\v\r\e\6\v\n\4\p\t\h\a\a\o\s\n\m\v\x\t\i\f\s\q\b\5\s\g\g\l\g\q\m\x\x\f\o\6\z\1\2\w\f\q\s\o\g\j\1\0\u\7\0\t\n\m\1\r\8\f\3\o\b\s\r\w\d\i\l\z\0\c\h\u\s\s\o\2\b\z\a\x\u\s\t\7\8\l\4\l\r\7\l\j\q ]] 00:11:22.714 00:11:22.714 real 0m1.527s 00:11:22.714 user 0m0.795s 00:11:22.714 sys 0m0.529s 00:11:22.714 ************************************ 00:11:22.714 END TEST dd_flag_nofollow 00:11:22.714 ************************************ 00:11:22.714 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.714 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:22.973 ************************************ 00:11:22.973 START TEST dd_flag_noatime 00:11:22.973 ************************************ 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733396089 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733396089 00:11:22.973 10:54:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:11:23.945 10:54:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:23.945 [2024-12-05 10:54:50.986501] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:23.945 [2024-12-05 10:54:50.986578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60286 ] 00:11:24.203 [2024-12-05 10:54:51.137490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.203 [2024-12-05 10:54:51.180470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.203 [2024-12-05 10:54:51.222565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.203  [2024-12-05T10:54:51.621Z] Copying: 512/512 [B] (average 500 kBps) 00:11:24.462 00:11:24.462 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:24.462 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733396089 )) 00:11:24.462 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:24.462 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733396089 )) 00:11:24.462 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:24.462 [2024-12-05 10:54:51.467568] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
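
The (( atime_if == ... )) assertions around this point encode the O_NOATIME contract: the epoch printed by stat --printf=%X must not move across the --iflag=noatime copy, and is only allowed to advance after the second run, which drops the flag. A standalone sketch with coreutils, assuming the caller owns the file (O_NOATIME requires it) and the mount's atime policy, e.g. relatime, does not mask the effect:

    printf %s 'payload' > dump0
    before=$(stat --printf=%X dump0)
    sleep 1
    # O_NOATIME read: access time should stay frozen (the flag may be silently ignored on some mounts)
    dd if=dump0 of=/dev/null iflag=noatime status=none
    (( $(stat --printf=%X dump0) == before )) && echo atime-preserved

Dropping iflag=noatime and reading again is the positive control the test performs next: the epoch reported by stat should then exceed the saved value.
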
00:11:24.462 [2024-12-05 10:54:51.467679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:11:24.462 [2024-12-05 10:54:51.620605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.721 [2024-12-05 10:54:51.676608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.721 [2024-12-05 10:54:51.718690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.721  [2024-12-05T10:54:52.139Z] Copying: 512/512 [B] (average 500 kBps) 00:11:24.980 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:24.980 ************************************ 00:11:24.980 END TEST dd_flag_noatime 00:11:24.980 ************************************ 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733396091 )) 00:11:24.980 00:11:24.980 real 0m2.029s 00:11:24.980 user 0m0.546s 00:11:24.980 sys 0m0.489s 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.980 10:54:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:24.980 ************************************ 00:11:24.980 START TEST dd_flags_misc 00:11:24.980 ************************************ 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:24.980 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:24.980 [2024-12-05 10:54:52.071932] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:24.980 [2024-12-05 10:54:52.072186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60328 ] 00:11:25.239 [2024-12-05 10:54:52.222934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.239 [2024-12-05 10:54:52.277723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.239 [2024-12-05 10:54:52.319872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.239  [2024-12-05T10:54:52.657Z] Copying: 512/512 [B] (average 500 kBps) 00:11:25.498 00:11:25.498 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jftrnzntcah2urmesw1m3cuwiss45afourpmf55na8mqgs61ar1xnahozq5sc51wjyv98w1sobc93zstfxlmonai2jl3thr4hzumbptxnhhtmvrdnij9jlyj95rpo6if5fw5n39mcs460kfuo5bizvk40p1tny8nn8jovv8atohv46050txrvgwnchxwibxsuma74zwa0skcyzlbbpuxbf76rt1f5bj9ysukw3gqyp4ld1bnctgktk50sappzyyfz9t5hj09xkzr6bfxc9hzmqv6s062fvpdmxhor317uskj1tp67mlewtm6dvk9cugjqk04ezabcp6nd99cp38j9sbmu2jqi49fq5ahs4aooakx3953ffs0pjedlbs6ylpjtawi2uz6mnxjvxrtvmgxigzzx8k1opvvk9waxtd2vm8ltuw44dgw9h3gaewdnttrp69sipn6ggt5e8sw8u5488ouxy6ltg73e9kncf0tjkp2rdjbjvdvacmaawvaddpv == \j\f\t\r\n\z\n\t\c\a\h\2\u\r\m\e\s\w\1\m\3\c\u\w\i\s\s\4\5\a\f\o\u\r\p\m\f\5\5\n\a\8\m\q\g\s\6\1\a\r\1\x\n\a\h\o\z\q\5\s\c\5\1\w\j\y\v\9\8\w\1\s\o\b\c\9\3\z\s\t\f\x\l\m\o\n\a\i\2\j\l\3\t\h\r\4\h\z\u\m\b\p\t\x\n\h\h\t\m\v\r\d\n\i\j\9\j\l\y\j\9\5\r\p\o\6\i\f\5\f\w\5\n\3\9\m\c\s\4\6\0\k\f\u\o\5\b\i\z\v\k\4\0\p\1\t\n\y\8\n\n\8\j\o\v\v\8\a\t\o\h\v\4\6\0\5\0\t\x\r\v\g\w\n\c\h\x\w\i\b\x\s\u\m\a\7\4\z\w\a\0\s\k\c\y\z\l\b\b\p\u\x\b\f\7\6\r\t\1\f\5\b\j\9\y\s\u\k\w\3\g\q\y\p\4\l\d\1\b\n\c\t\g\k\t\k\5\0\s\a\p\p\z\y\y\f\z\9\t\5\h\j\0\9\x\k\z\r\6\b\f\x\c\9\h\z\m\q\v\6\s\0\6\2\f\v\p\d\m\x\h\o\r\3\1\7\u\s\k\j\1\t\p\6\7\m\l\e\w\t\m\6\d\v\k\9\c\u\g\j\q\k\0\4\e\z\a\b\c\p\6\n\d\9\9\c\p\3\8\j\9\s\b\m\u\2\j\q\i\4\9\f\q\5\a\h\s\4\a\o\o\a\k\x\3\9\5\3\f\f\s\0\p\j\e\d\l\b\s\6\y\l\p\j\t\a\w\i\2\u\z\6\m\n\x\j\v\x\r\t\v\m\g\x\i\g\z\z\x\8\k\1\o\p\v\v\k\9\w\a\x\t\d\2\v\m\8\l\t\u\w\4\4\d\g\w\9\h\3\g\a\e\w\d\n\t\t\r\p\6\9\s\i\p\n\6\g\g\t\5\e\8\s\w\8\u\5\4\8\8\o\u\x\y\6\l\t\g\7\3\e\9\k\n\c\f\0\t\j\k\p\2\r\d\j\b\j\v\d\v\a\c\m\a\a\w\v\a\d\d\p\v ]] 00:11:25.498 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:25.498 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:25.498 [2024-12-05 10:54:52.562008] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:25.498 [2024-12-05 10:54:52.562263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:11:25.757 [2024-12-05 10:54:52.711494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.757 [2024-12-05 10:54:52.765123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.757 [2024-12-05 10:54:52.807039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.757  [2024-12-05T10:54:53.175Z] Copying: 512/512 [B] (average 500 kBps) 00:11:26.016 00:11:26.017 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jftrnzntcah2urmesw1m3cuwiss45afourpmf55na8mqgs61ar1xnahozq5sc51wjyv98w1sobc93zstfxlmonai2jl3thr4hzumbptxnhhtmvrdnij9jlyj95rpo6if5fw5n39mcs460kfuo5bizvk40p1tny8nn8jovv8atohv46050txrvgwnchxwibxsuma74zwa0skcyzlbbpuxbf76rt1f5bj9ysukw3gqyp4ld1bnctgktk50sappzyyfz9t5hj09xkzr6bfxc9hzmqv6s062fvpdmxhor317uskj1tp67mlewtm6dvk9cugjqk04ezabcp6nd99cp38j9sbmu2jqi49fq5ahs4aooakx3953ffs0pjedlbs6ylpjtawi2uz6mnxjvxrtvmgxigzzx8k1opvvk9waxtd2vm8ltuw44dgw9h3gaewdnttrp69sipn6ggt5e8sw8u5488ouxy6ltg73e9kncf0tjkp2rdjbjvdvacmaawvaddpv == \j\f\t\r\n\z\n\t\c\a\h\2\u\r\m\e\s\w\1\m\3\c\u\w\i\s\s\4\5\a\f\o\u\r\p\m\f\5\5\n\a\8\m\q\g\s\6\1\a\r\1\x\n\a\h\o\z\q\5\s\c\5\1\w\j\y\v\9\8\w\1\s\o\b\c\9\3\z\s\t\f\x\l\m\o\n\a\i\2\j\l\3\t\h\r\4\h\z\u\m\b\p\t\x\n\h\h\t\m\v\r\d\n\i\j\9\j\l\y\j\9\5\r\p\o\6\i\f\5\f\w\5\n\3\9\m\c\s\4\6\0\k\f\u\o\5\b\i\z\v\k\4\0\p\1\t\n\y\8\n\n\8\j\o\v\v\8\a\t\o\h\v\4\6\0\5\0\t\x\r\v\g\w\n\c\h\x\w\i\b\x\s\u\m\a\7\4\z\w\a\0\s\k\c\y\z\l\b\b\p\u\x\b\f\7\6\r\t\1\f\5\b\j\9\y\s\u\k\w\3\g\q\y\p\4\l\d\1\b\n\c\t\g\k\t\k\5\0\s\a\p\p\z\y\y\f\z\9\t\5\h\j\0\9\x\k\z\r\6\b\f\x\c\9\h\z\m\q\v\6\s\0\6\2\f\v\p\d\m\x\h\o\r\3\1\7\u\s\k\j\1\t\p\6\7\m\l\e\w\t\m\6\d\v\k\9\c\u\g\j\q\k\0\4\e\z\a\b\c\p\6\n\d\9\9\c\p\3\8\j\9\s\b\m\u\2\j\q\i\4\9\f\q\5\a\h\s\4\a\o\o\a\k\x\3\9\5\3\f\f\s\0\p\j\e\d\l\b\s\6\y\l\p\j\t\a\w\i\2\u\z\6\m\n\x\j\v\x\r\t\v\m\g\x\i\g\z\z\x\8\k\1\o\p\v\v\k\9\w\a\x\t\d\2\v\m\8\l\t\u\w\4\4\d\g\w\9\h\3\g\a\e\w\d\n\t\t\r\p\6\9\s\i\p\n\6\g\g\t\5\e\8\s\w\8\u\5\4\8\8\o\u\x\y\6\l\t\g\7\3\e\9\k\n\c\f\0\t\j\k\p\2\r\d\j\b\j\v\d\v\a\c\m\a\a\w\v\a\d\d\p\v ]] 00:11:26.017 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:26.017 10:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:26.017 [2024-12-05 10:54:53.051606] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:26.017 [2024-12-05 10:54:53.051688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60347 ] 00:11:26.276 [2024-12-05 10:54:53.200785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.276 [2024-12-05 10:54:53.256158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.276 [2024-12-05 10:54:53.298600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.276  [2024-12-05T10:54:53.695Z] Copying: 512/512 [B] (average 83 kBps) 00:11:26.536 00:11:26.536 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jftrnzntcah2urmesw1m3cuwiss45afourpmf55na8mqgs61ar1xnahozq5sc51wjyv98w1sobc93zstfxlmonai2jl3thr4hzumbptxnhhtmvrdnij9jlyj95rpo6if5fw5n39mcs460kfuo5bizvk40p1tny8nn8jovv8atohv46050txrvgwnchxwibxsuma74zwa0skcyzlbbpuxbf76rt1f5bj9ysukw3gqyp4ld1bnctgktk50sappzyyfz9t5hj09xkzr6bfxc9hzmqv6s062fvpdmxhor317uskj1tp67mlewtm6dvk9cugjqk04ezabcp6nd99cp38j9sbmu2jqi49fq5ahs4aooakx3953ffs0pjedlbs6ylpjtawi2uz6mnxjvxrtvmgxigzzx8k1opvvk9waxtd2vm8ltuw44dgw9h3gaewdnttrp69sipn6ggt5e8sw8u5488ouxy6ltg73e9kncf0tjkp2rdjbjvdvacmaawvaddpv == \j\f\t\r\n\z\n\t\c\a\h\2\u\r\m\e\s\w\1\m\3\c\u\w\i\s\s\4\5\a\f\o\u\r\p\m\f\5\5\n\a\8\m\q\g\s\6\1\a\r\1\x\n\a\h\o\z\q\5\s\c\5\1\w\j\y\v\9\8\w\1\s\o\b\c\9\3\z\s\t\f\x\l\m\o\n\a\i\2\j\l\3\t\h\r\4\h\z\u\m\b\p\t\x\n\h\h\t\m\v\r\d\n\i\j\9\j\l\y\j\9\5\r\p\o\6\i\f\5\f\w\5\n\3\9\m\c\s\4\6\0\k\f\u\o\5\b\i\z\v\k\4\0\p\1\t\n\y\8\n\n\8\j\o\v\v\8\a\t\o\h\v\4\6\0\5\0\t\x\r\v\g\w\n\c\h\x\w\i\b\x\s\u\m\a\7\4\z\w\a\0\s\k\c\y\z\l\b\b\p\u\x\b\f\7\6\r\t\1\f\5\b\j\9\y\s\u\k\w\3\g\q\y\p\4\l\d\1\b\n\c\t\g\k\t\k\5\0\s\a\p\p\z\y\y\f\z\9\t\5\h\j\0\9\x\k\z\r\6\b\f\x\c\9\h\z\m\q\v\6\s\0\6\2\f\v\p\d\m\x\h\o\r\3\1\7\u\s\k\j\1\t\p\6\7\m\l\e\w\t\m\6\d\v\k\9\c\u\g\j\q\k\0\4\e\z\a\b\c\p\6\n\d\9\9\c\p\3\8\j\9\s\b\m\u\2\j\q\i\4\9\f\q\5\a\h\s\4\a\o\o\a\k\x\3\9\5\3\f\f\s\0\p\j\e\d\l\b\s\6\y\l\p\j\t\a\w\i\2\u\z\6\m\n\x\j\v\x\r\t\v\m\g\x\i\g\z\z\x\8\k\1\o\p\v\v\k\9\w\a\x\t\d\2\v\m\8\l\t\u\w\4\4\d\g\w\9\h\3\g\a\e\w\d\n\t\t\r\p\6\9\s\i\p\n\6\g\g\t\5\e\8\s\w\8\u\5\4\8\8\o\u\x\y\6\l\t\g\7\3\e\9\k\n\c\f\0\t\j\k\p\2\r\d\j\b\j\v\d\v\a\c\m\a\a\w\v\a\d\d\p\v ]] 00:11:26.536 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:26.536 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:26.536 [2024-12-05 10:54:53.551534] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
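One detail worth noting in the runs above: in this run the sync pass averaged 83 kBps where the direct and nonblock passes averaged 500 kBps, consistent with O_SYNC forcing data and metadata to stable storage on every write. A rough way to observe the same cost with plain GNU dd — a sketch with hypothetical scratch paths; absolute rates depend entirely on the backing storage:

    # Compare buffered vs O_SYNC write rates on a small scratch file;
    # dd reports its stats on stderr, so grab the final rate line.
    dd if=/dev/zero of=/tmp/plain.bin bs=512 count=1024 2>&1 | tail -n1
    dd if=/dev/zero of=/tmp/sync.bin  bs=512 count=1024 oflag=sync 2>&1 | tail -n1
    rm -f /tmp/plain.bin /tmp/sync.bin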
00:11:26.536 [2024-12-05 10:54:53.551618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60357 ] 00:11:26.795 [2024-12-05 10:54:53.705314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.795 [2024-12-05 10:54:53.760145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.795 [2024-12-05 10:54:53.802084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:26.795  [2024-12-05T10:54:54.212Z] Copying: 512/512 [B] (average 500 kBps) 00:11:27.053 00:11:27.053 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jftrnzntcah2urmesw1m3cuwiss45afourpmf55na8mqgs61ar1xnahozq5sc51wjyv98w1sobc93zstfxlmonai2jl3thr4hzumbptxnhhtmvrdnij9jlyj95rpo6if5fw5n39mcs460kfuo5bizvk40p1tny8nn8jovv8atohv46050txrvgwnchxwibxsuma74zwa0skcyzlbbpuxbf76rt1f5bj9ysukw3gqyp4ld1bnctgktk50sappzyyfz9t5hj09xkzr6bfxc9hzmqv6s062fvpdmxhor317uskj1tp67mlewtm6dvk9cugjqk04ezabcp6nd99cp38j9sbmu2jqi49fq5ahs4aooakx3953ffs0pjedlbs6ylpjtawi2uz6mnxjvxrtvmgxigzzx8k1opvvk9waxtd2vm8ltuw44dgw9h3gaewdnttrp69sipn6ggt5e8sw8u5488ouxy6ltg73e9kncf0tjkp2rdjbjvdvacmaawvaddpv == \j\f\t\r\n\z\n\t\c\a\h\2\u\r\m\e\s\w\1\m\3\c\u\w\i\s\s\4\5\a\f\o\u\r\p\m\f\5\5\n\a\8\m\q\g\s\6\1\a\r\1\x\n\a\h\o\z\q\5\s\c\5\1\w\j\y\v\9\8\w\1\s\o\b\c\9\3\z\s\t\f\x\l\m\o\n\a\i\2\j\l\3\t\h\r\4\h\z\u\m\b\p\t\x\n\h\h\t\m\v\r\d\n\i\j\9\j\l\y\j\9\5\r\p\o\6\i\f\5\f\w\5\n\3\9\m\c\s\4\6\0\k\f\u\o\5\b\i\z\v\k\4\0\p\1\t\n\y\8\n\n\8\j\o\v\v\8\a\t\o\h\v\4\6\0\5\0\t\x\r\v\g\w\n\c\h\x\w\i\b\x\s\u\m\a\7\4\z\w\a\0\s\k\c\y\z\l\b\b\p\u\x\b\f\7\6\r\t\1\f\5\b\j\9\y\s\u\k\w\3\g\q\y\p\4\l\d\1\b\n\c\t\g\k\t\k\5\0\s\a\p\p\z\y\y\f\z\9\t\5\h\j\0\9\x\k\z\r\6\b\f\x\c\9\h\z\m\q\v\6\s\0\6\2\f\v\p\d\m\x\h\o\r\3\1\7\u\s\k\j\1\t\p\6\7\m\l\e\w\t\m\6\d\v\k\9\c\u\g\j\q\k\0\4\e\z\a\b\c\p\6\n\d\9\9\c\p\3\8\j\9\s\b\m\u\2\j\q\i\4\9\f\q\5\a\h\s\4\a\o\o\a\k\x\3\9\5\3\f\f\s\0\p\j\e\d\l\b\s\6\y\l\p\j\t\a\w\i\2\u\z\6\m\n\x\j\v\x\r\t\v\m\g\x\i\g\z\z\x\8\k\1\o\p\v\v\k\9\w\a\x\t\d\2\v\m\8\l\t\u\w\4\4\d\g\w\9\h\3\g\a\e\w\d\n\t\t\r\p\6\9\s\i\p\n\6\g\g\t\5\e\8\s\w\8\u\5\4\8\8\o\u\x\y\6\l\t\g\7\3\e\9\k\n\c\f\0\t\j\k\p\2\r\d\j\b\j\v\d\v\a\c\m\a\a\w\v\a\d\d\p\v ]] 00:11:27.053 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:27.053 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:11:27.053 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:11:27.053 10:54:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:27.053 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:27.054 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:27.054 [2024-12-05 10:54:54.067335] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:27.054 [2024-12-05 10:54:54.067413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:11:27.331 [2024-12-05 10:54:54.216856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.331 [2024-12-05 10:54:54.269435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.332 [2024-12-05 10:54:54.311329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.332  [2024-12-05T10:54:54.748Z] Copying: 512/512 [B] (average 500 kBps) 00:11:27.589 00:11:27.590 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z200ejdw2uhabeg7lr3m02g9g2tq52bojwrygvec3x0phwufeujo9a8ebj48j1yvvkvej3p4yj3ya84rc8r4tbsja953bgxibgpnbjond31mzj3gc5gdlrl6f3gxgtdyw6h3acdqknliy4gocao1zh6frp4ytnqkud9p64fcj9ceccaamepbn6hfus5ibthx734a8079uwx4rdrys4mcv7q786pnu134zwg0c9fu9vilu2zl4kkgntg8zxfdaymocep76x1u58j35swwzu91k26x6f63fr80zrkbh3sxkfc4wwwflj762ohbayfoqo1p0cwtsmozsqv9o2fxiny867e247012ge0tnk2e4bgq43qvzvs6pyvqgdwzzt8rjyahkq5ma459nevnqyetviebze5fn1q0ltmp94c66nmoki4s8rhzsddpl9cc31lv72ax1eo6kalk9qf971q3wr9tcbhdfq71mtijjno9ydygvcyy7f8zdxkhow6hnwuhyn4 == \z\2\0\0\e\j\d\w\2\u\h\a\b\e\g\7\l\r\3\m\0\2\g\9\g\2\t\q\5\2\b\o\j\w\r\y\g\v\e\c\3\x\0\p\h\w\u\f\e\u\j\o\9\a\8\e\b\j\4\8\j\1\y\v\v\k\v\e\j\3\p\4\y\j\3\y\a\8\4\r\c\8\r\4\t\b\s\j\a\9\5\3\b\g\x\i\b\g\p\n\b\j\o\n\d\3\1\m\z\j\3\g\c\5\g\d\l\r\l\6\f\3\g\x\g\t\d\y\w\6\h\3\a\c\d\q\k\n\l\i\y\4\g\o\c\a\o\1\z\h\6\f\r\p\4\y\t\n\q\k\u\d\9\p\6\4\f\c\j\9\c\e\c\c\a\a\m\e\p\b\n\6\h\f\u\s\5\i\b\t\h\x\7\3\4\a\8\0\7\9\u\w\x\4\r\d\r\y\s\4\m\c\v\7\q\7\8\6\p\n\u\1\3\4\z\w\g\0\c\9\f\u\9\v\i\l\u\2\z\l\4\k\k\g\n\t\g\8\z\x\f\d\a\y\m\o\c\e\p\7\6\x\1\u\5\8\j\3\5\s\w\w\z\u\9\1\k\2\6\x\6\f\6\3\f\r\8\0\z\r\k\b\h\3\s\x\k\f\c\4\w\w\w\f\l\j\7\6\2\o\h\b\a\y\f\o\q\o\1\p\0\c\w\t\s\m\o\z\s\q\v\9\o\2\f\x\i\n\y\8\6\7\e\2\4\7\0\1\2\g\e\0\t\n\k\2\e\4\b\g\q\4\3\q\v\z\v\s\6\p\y\v\q\g\d\w\z\z\t\8\r\j\y\a\h\k\q\5\m\a\4\5\9\n\e\v\n\q\y\e\t\v\i\e\b\z\e\5\f\n\1\q\0\l\t\m\p\9\4\c\6\6\n\m\o\k\i\4\s\8\r\h\z\s\d\d\p\l\9\c\c\3\1\l\v\7\2\a\x\1\e\o\6\k\a\l\k\9\q\f\9\7\1\q\3\w\r\9\t\c\b\h\d\f\q\7\1\m\t\i\j\j\n\o\9\y\d\y\g\v\c\y\y\7\f\8\z\d\x\k\h\o\w\6\h\n\w\u\h\y\n\4 ]] 00:11:27.590 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:27.590 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:27.590 [2024-12-05 10:54:54.554921] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:27.590 [2024-12-05 10:54:54.555003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60376 ] 00:11:27.590 [2024-12-05 10:54:54.689395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.590 [2024-12-05 10:54:54.745613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.848 [2024-12-05 10:54:54.787958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.848  [2024-12-05T10:54:55.007Z] Copying: 512/512 [B] (average 500 kBps) 00:11:27.848 00:11:27.848 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z200ejdw2uhabeg7lr3m02g9g2tq52bojwrygvec3x0phwufeujo9a8ebj48j1yvvkvej3p4yj3ya84rc8r4tbsja953bgxibgpnbjond31mzj3gc5gdlrl6f3gxgtdyw6h3acdqknliy4gocao1zh6frp4ytnqkud9p64fcj9ceccaamepbn6hfus5ibthx734a8079uwx4rdrys4mcv7q786pnu134zwg0c9fu9vilu2zl4kkgntg8zxfdaymocep76x1u58j35swwzu91k26x6f63fr80zrkbh3sxkfc4wwwflj762ohbayfoqo1p0cwtsmozsqv9o2fxiny867e247012ge0tnk2e4bgq43qvzvs6pyvqgdwzzt8rjyahkq5ma459nevnqyetviebze5fn1q0ltmp94c66nmoki4s8rhzsddpl9cc31lv72ax1eo6kalk9qf971q3wr9tcbhdfq71mtijjno9ydygvcyy7f8zdxkhow6hnwuhyn4 == \z\2\0\0\e\j\d\w\2\u\h\a\b\e\g\7\l\r\3\m\0\2\g\9\g\2\t\q\5\2\b\o\j\w\r\y\g\v\e\c\3\x\0\p\h\w\u\f\e\u\j\o\9\a\8\e\b\j\4\8\j\1\y\v\v\k\v\e\j\3\p\4\y\j\3\y\a\8\4\r\c\8\r\4\t\b\s\j\a\9\5\3\b\g\x\i\b\g\p\n\b\j\o\n\d\3\1\m\z\j\3\g\c\5\g\d\l\r\l\6\f\3\g\x\g\t\d\y\w\6\h\3\a\c\d\q\k\n\l\i\y\4\g\o\c\a\o\1\z\h\6\f\r\p\4\y\t\n\q\k\u\d\9\p\6\4\f\c\j\9\c\e\c\c\a\a\m\e\p\b\n\6\h\f\u\s\5\i\b\t\h\x\7\3\4\a\8\0\7\9\u\w\x\4\r\d\r\y\s\4\m\c\v\7\q\7\8\6\p\n\u\1\3\4\z\w\g\0\c\9\f\u\9\v\i\l\u\2\z\l\4\k\k\g\n\t\g\8\z\x\f\d\a\y\m\o\c\e\p\7\6\x\1\u\5\8\j\3\5\s\w\w\z\u\9\1\k\2\6\x\6\f\6\3\f\r\8\0\z\r\k\b\h\3\s\x\k\f\c\4\w\w\w\f\l\j\7\6\2\o\h\b\a\y\f\o\q\o\1\p\0\c\w\t\s\m\o\z\s\q\v\9\o\2\f\x\i\n\y\8\6\7\e\2\4\7\0\1\2\g\e\0\t\n\k\2\e\4\b\g\q\4\3\q\v\z\v\s\6\p\y\v\q\g\d\w\z\z\t\8\r\j\y\a\h\k\q\5\m\a\4\5\9\n\e\v\n\q\y\e\t\v\i\e\b\z\e\5\f\n\1\q\0\l\t\m\p\9\4\c\6\6\n\m\o\k\i\4\s\8\r\h\z\s\d\d\p\l\9\c\c\3\1\l\v\7\2\a\x\1\e\o\6\k\a\l\k\9\q\f\9\7\1\q\3\w\r\9\t\c\b\h\d\f\q\7\1\m\t\i\j\j\n\o\9\y\d\y\g\v\c\y\y\7\f\8\z\d\x\k\h\o\w\6\h\n\w\u\h\y\n\4 ]] 00:11:27.848 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:27.848 10:54:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:28.109 [2024-12-05 10:54:55.032289] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:28.109 [2024-12-05 10:54:55.032403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:11:28.109 [2024-12-05 10:54:55.183502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.109 [2024-12-05 10:54:55.238865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.368 [2024-12-05 10:54:55.280713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.368  [2024-12-05T10:54:55.527Z] Copying: 512/512 [B] (average 125 kBps) 00:11:28.368 00:11:28.368 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z200ejdw2uhabeg7lr3m02g9g2tq52bojwrygvec3x0phwufeujo9a8ebj48j1yvvkvej3p4yj3ya84rc8r4tbsja953bgxibgpnbjond31mzj3gc5gdlrl6f3gxgtdyw6h3acdqknliy4gocao1zh6frp4ytnqkud9p64fcj9ceccaamepbn6hfus5ibthx734a8079uwx4rdrys4mcv7q786pnu134zwg0c9fu9vilu2zl4kkgntg8zxfdaymocep76x1u58j35swwzu91k26x6f63fr80zrkbh3sxkfc4wwwflj762ohbayfoqo1p0cwtsmozsqv9o2fxiny867e247012ge0tnk2e4bgq43qvzvs6pyvqgdwzzt8rjyahkq5ma459nevnqyetviebze5fn1q0ltmp94c66nmoki4s8rhzsddpl9cc31lv72ax1eo6kalk9qf971q3wr9tcbhdfq71mtijjno9ydygvcyy7f8zdxkhow6hnwuhyn4 == \z\2\0\0\e\j\d\w\2\u\h\a\b\e\g\7\l\r\3\m\0\2\g\9\g\2\t\q\5\2\b\o\j\w\r\y\g\v\e\c\3\x\0\p\h\w\u\f\e\u\j\o\9\a\8\e\b\j\4\8\j\1\y\v\v\k\v\e\j\3\p\4\y\j\3\y\a\8\4\r\c\8\r\4\t\b\s\j\a\9\5\3\b\g\x\i\b\g\p\n\b\j\o\n\d\3\1\m\z\j\3\g\c\5\g\d\l\r\l\6\f\3\g\x\g\t\d\y\w\6\h\3\a\c\d\q\k\n\l\i\y\4\g\o\c\a\o\1\z\h\6\f\r\p\4\y\t\n\q\k\u\d\9\p\6\4\f\c\j\9\c\e\c\c\a\a\m\e\p\b\n\6\h\f\u\s\5\i\b\t\h\x\7\3\4\a\8\0\7\9\u\w\x\4\r\d\r\y\s\4\m\c\v\7\q\7\8\6\p\n\u\1\3\4\z\w\g\0\c\9\f\u\9\v\i\l\u\2\z\l\4\k\k\g\n\t\g\8\z\x\f\d\a\y\m\o\c\e\p\7\6\x\1\u\5\8\j\3\5\s\w\w\z\u\9\1\k\2\6\x\6\f\6\3\f\r\8\0\z\r\k\b\h\3\s\x\k\f\c\4\w\w\w\f\l\j\7\6\2\o\h\b\a\y\f\o\q\o\1\p\0\c\w\t\s\m\o\z\s\q\v\9\o\2\f\x\i\n\y\8\6\7\e\2\4\7\0\1\2\g\e\0\t\n\k\2\e\4\b\g\q\4\3\q\v\z\v\s\6\p\y\v\q\g\d\w\z\z\t\8\r\j\y\a\h\k\q\5\m\a\4\5\9\n\e\v\n\q\y\e\t\v\i\e\b\z\e\5\f\n\1\q\0\l\t\m\p\9\4\c\6\6\n\m\o\k\i\4\s\8\r\h\z\s\d\d\p\l\9\c\c\3\1\l\v\7\2\a\x\1\e\o\6\k\a\l\k\9\q\f\9\7\1\q\3\w\r\9\t\c\b\h\d\f\q\7\1\m\t\i\j\j\n\o\9\y\d\y\g\v\c\y\y\7\f\8\z\d\x\k\h\o\w\6\h\n\w\u\h\y\n\4 ]] 00:11:28.368 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:28.368 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:28.628 [2024-12-05 10:54:55.530773] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:28.628 [2024-12-05 10:54:55.531234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:11:28.628 [2024-12-05 10:54:55.682197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.628 [2024-12-05 10:54:55.736552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.628 [2024-12-05 10:54:55.778781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:28.886  [2024-12-05T10:54:56.045Z] Copying: 512/512 [B] (average 250 kBps) 00:11:28.886 00:11:28.887 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z200ejdw2uhabeg7lr3m02g9g2tq52bojwrygvec3x0phwufeujo9a8ebj48j1yvvkvej3p4yj3ya84rc8r4tbsja953bgxibgpnbjond31mzj3gc5gdlrl6f3gxgtdyw6h3acdqknliy4gocao1zh6frp4ytnqkud9p64fcj9ceccaamepbn6hfus5ibthx734a8079uwx4rdrys4mcv7q786pnu134zwg0c9fu9vilu2zl4kkgntg8zxfdaymocep76x1u58j35swwzu91k26x6f63fr80zrkbh3sxkfc4wwwflj762ohbayfoqo1p0cwtsmozsqv9o2fxiny867e247012ge0tnk2e4bgq43qvzvs6pyvqgdwzzt8rjyahkq5ma459nevnqyetviebze5fn1q0ltmp94c66nmoki4s8rhzsddpl9cc31lv72ax1eo6kalk9qf971q3wr9tcbhdfq71mtijjno9ydygvcyy7f8zdxkhow6hnwuhyn4 == \z\2\0\0\e\j\d\w\2\u\h\a\b\e\g\7\l\r\3\m\0\2\g\9\g\2\t\q\5\2\b\o\j\w\r\y\g\v\e\c\3\x\0\p\h\w\u\f\e\u\j\o\9\a\8\e\b\j\4\8\j\1\y\v\v\k\v\e\j\3\p\4\y\j\3\y\a\8\4\r\c\8\r\4\t\b\s\j\a\9\5\3\b\g\x\i\b\g\p\n\b\j\o\n\d\3\1\m\z\j\3\g\c\5\g\d\l\r\l\6\f\3\g\x\g\t\d\y\w\6\h\3\a\c\d\q\k\n\l\i\y\4\g\o\c\a\o\1\z\h\6\f\r\p\4\y\t\n\q\k\u\d\9\p\6\4\f\c\j\9\c\e\c\c\a\a\m\e\p\b\n\6\h\f\u\s\5\i\b\t\h\x\7\3\4\a\8\0\7\9\u\w\x\4\r\d\r\y\s\4\m\c\v\7\q\7\8\6\p\n\u\1\3\4\z\w\g\0\c\9\f\u\9\v\i\l\u\2\z\l\4\k\k\g\n\t\g\8\z\x\f\d\a\y\m\o\c\e\p\7\6\x\1\u\5\8\j\3\5\s\w\w\z\u\9\1\k\2\6\x\6\f\6\3\f\r\8\0\z\r\k\b\h\3\s\x\k\f\c\4\w\w\w\f\l\j\7\6\2\o\h\b\a\y\f\o\q\o\1\p\0\c\w\t\s\m\o\z\s\q\v\9\o\2\f\x\i\n\y\8\6\7\e\2\4\7\0\1\2\g\e\0\t\n\k\2\e\4\b\g\q\4\3\q\v\z\v\s\6\p\y\v\q\g\d\w\z\z\t\8\r\j\y\a\h\k\q\5\m\a\4\5\9\n\e\v\n\q\y\e\t\v\i\e\b\z\e\5\f\n\1\q\0\l\t\m\p\9\4\c\6\6\n\m\o\k\i\4\s\8\r\h\z\s\d\d\p\l\9\c\c\3\1\l\v\7\2\a\x\1\e\o\6\k\a\l\k\9\q\f\9\7\1\q\3\w\r\9\t\c\b\h\d\f\q\7\1\m\t\i\j\j\n\o\9\y\d\y\g\v\c\y\y\7\f\8\z\d\x\k\h\o\w\6\h\n\w\u\h\y\n\4 ]] 00:11:28.887 00:11:28.887 real 0m3.970s 00:11:28.887 user 0m2.127s 00:11:28.887 sys 0m1.875s 00:11:28.887 ************************************ 00:11:28.887 END TEST dd_flags_misc 00:11:28.887 ************************************ 00:11:28.887 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.887 10:54:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:11:28.887 * Second test run, disabling liburing, forcing AIO 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.887 10:54:56 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.146 ************************************ 00:11:29.146 START TEST dd_flag_append_forced_aio 00:11:29.146 ************************************ 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=vygqtqhy8rtjsoesi7109u4rt6os1av2 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=cfn88mjpbtxdd95ehcvlxhypiscdon2o 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s vygqtqhy8rtjsoesi7109u4rt6os1av2 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s cfn88mjpbtxdd95ehcvlxhypiscdon2o 00:11:29.146 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:11:29.146 [2024-12-05 10:54:56.116240] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
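The append test above writes two 32-byte payloads (dump0 and dump1), appends dump0 onto dump1 through spdk_dd with --oflag=append, and then asserts the destination holds dump1 immediately followed by dump0. A minimal standalone sketch of the same check with GNU dd, reusing the payload values from this run (the /tmp paths are hypothetical; conv=notrunc keeps dd from truncating the destination before appending):

    dump0=vygqtqhy8rtjsoesi7109u4rt6os1av2   # 32-byte payloads from the run above
    dump1=cfn88mjpbtxdd95ehcvlxhypiscdon2o
    printf %s "$dump0" > /tmp/dd.dump0
    printf %s "$dump1" > /tmp/dd.dump1
    dd if=/tmp/dd.dump0 of=/tmp/dd.dump1 oflag=append conv=notrunc status=none
    # Destination must now be the original dump1 with dump0 appended.
    [[ $(</tmp/dd.dump1) == "${dump1}${dump0}" ]] && echo append-ok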
00:11:29.146 [2024-12-05 10:54:56.116377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60418 ] 00:11:29.146 [2024-12-05 10:54:56.281675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.405 [2024-12-05 10:54:56.334904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.405 [2024-12-05 10:54:56.376362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.405  [2024-12-05T10:54:56.823Z] Copying: 32/32 [B] (average 31 kBps) 00:11:29.664 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ cfn88mjpbtxdd95ehcvlxhypiscdon2ovygqtqhy8rtjsoesi7109u4rt6os1av2 == \c\f\n\8\8\m\j\p\b\t\x\d\d\9\5\e\h\c\v\l\x\h\y\p\i\s\c\d\o\n\2\o\v\y\g\q\t\q\h\y\8\r\t\j\s\o\e\s\i\7\1\0\9\u\4\r\t\6\o\s\1\a\v\2 ]] 00:11:29.664 00:11:29.664 real 0m0.541s 00:11:29.664 user 0m0.297s 00:11:29.664 sys 0m0.123s 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:29.664 ************************************ 00:11:29.664 END TEST dd_flag_append_forced_aio 00:11:29.664 ************************************ 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:29.664 ************************************ 00:11:29.664 START TEST dd_flag_directory_forced_aio 00:11:29.664 ************************************ 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.664 10:54:56 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:29.664 10:54:56 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:29.664 [2024-12-05 10:54:56.730877] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:29.664 [2024-12-05 10:54:56.731126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60450 ] 00:11:29.922 [2024-12-05 10:54:56.882403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.922 [2024-12-05 10:54:56.935919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.922 [2024-12-05 10:54:56.977513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.922 [2024-12-05 10:54:57.008766] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:29.922 [2024-12-05 10:54:57.008810] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:29.922 [2024-12-05 10:54:57.008827] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.181 [2024-12-05 10:54:57.106564] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:30.181 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:11:30.181 [2024-12-05 10:54:57.229048] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:30.181 [2024-12-05 10:54:57.229129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60455 ] 00:11:30.441 [2024-12-05 10:54:57.377634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.441 [2024-12-05 10:54:57.431452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.441 [2024-12-05 10:54:57.472837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.441 [2024-12-05 10:54:57.504299] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:30.441 [2024-12-05 10:54:57.504563] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:11:30.441 [2024-12-05 10:54:57.504590] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.441 [2024-12-05 10:54:57.600951] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:11:30.700 10:54:57 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:30.700 ************************************ 00:11:30.700 END TEST dd_flag_directory_forced_aio 00:11:30.700 ************************************ 00:11:30.700 00:11:30.700 real 0m0.993s 00:11:30.700 user 0m0.525s 00:11:30.700 sys 0m0.258s 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:30.700 ************************************ 00:11:30.700 START TEST dd_flag_nofollow_forced_aio 00:11:30.700 ************************************ 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:30.700 10:54:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:30.700 [2024-12-05 10:54:57.802510] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:30.700 [2024-12-05 10:54:57.802780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60488 ] 00:11:30.959 [2024-12-05 10:54:57.954397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.959 [2024-12-05 10:54:58.007432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.959 [2024-12-05 10:54:58.048669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:30.959 [2024-12-05 10:54:58.080726] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:30.959 [2024-12-05 10:54:58.080776] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:11:30.959 [2024-12-05 10:54:58.080793] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.218 [2024-12-05 10:54:58.182103] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.218 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:11:31.219 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:11:31.219 [2024-12-05 10:54:58.317129] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:31.219 [2024-12-05 10:54:58.317210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:11:31.478 [2024-12-05 10:54:58.465645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.478 [2024-12-05 10:54:58.515957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.478 [2024-12-05 10:54:58.557508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.478 [2024-12-05 10:54:58.589244] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:31.478 [2024-12-05 10:54:58.589314] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:11:31.478 [2024-12-05 10:54:58.589333] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:31.737 [2024-12-05 10:54:58.686607] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:31.737 10:54:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:31.737 [2024-12-05 10:54:58.813549] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:31.737 [2024-12-05 10:54:58.813633] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60505 ] 00:11:31.996 [2024-12-05 10:54:58.962900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.996 [2024-12-05 10:54:59.015269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.996 [2024-12-05 10:54:59.056706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.996  [2024-12-05T10:54:59.413Z] Copying: 512/512 [B] (average 500 kBps) 00:11:32.254 00:11:32.254 ************************************ 00:11:32.254 END TEST dd_flag_nofollow_forced_aio 00:11:32.254 ************************************ 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ juccpreoog1eyvxfdy8onvujjbaayfbik0cmtb7xkyl7q7wh1wumoysemxm8oqmyj5l4hzc3i0k0tv439cbjx4qbte4fnd1ddnp73eln6xy4qf3glvj0q63rgm4fbw9t3mn5kbkhqxoopnsk0jhze5p7fjr2n1l9k81vng7mfz59q9hgu2t1ps2a13opop6nqme4c9krrhu2nlym7723hejx4z3shqs7n4nd5poo3ygf6xm7boidqk6m7wqi8yrtobua1v3h2umqurdu8tqugvblo8yvmp9msspm18on9o1de5zreap9suy9m4kfphfzog8j4d3zhrnnijtwwy69anr04if6i9j2yvwmr7oiyeshojt4e2tc6twovntdkez00a8m1izeff4d3k45okcb0stfrz7nqr283ce9qbp0cqyek59beerncc4iu13b9b01mie1qeu4ex6t0hsndiufyygz6w2giqly1k59n62fjxpycgppo6sh96ieek86zqtk == \j\u\c\c\p\r\e\o\o\g\1\e\y\v\x\f\d\y\8\o\n\v\u\j\j\b\a\a\y\f\b\i\k\0\c\m\t\b\7\x\k\y\l\7\q\7\w\h\1\w\u\m\o\y\s\e\m\x\m\8\o\q\m\y\j\5\l\4\h\z\c\3\i\0\k\0\t\v\4\3\9\c\b\j\x\4\q\b\t\e\4\f\n\d\1\d\d\n\p\7\3\e\l\n\6\x\y\4\q\f\3\g\l\v\j\0\q\6\3\r\g\m\4\f\b\w\9\t\3\m\n\5\k\b\k\h\q\x\o\o\p\n\s\k\0\j\h\z\e\5\p\7\f\j\r\2\n\1\l\9\k\8\1\v\n\g\7\m\f\z\5\9\q\9\h\g\u\2\t\1\p\s\2\a\1\3\o\p\o\p\6\n\q\m\e\4\c\9\k\r\r\h\u\2\n\l\y\m\7\7\2\3\h\e\j\x\4\z\3\s\h\q\s\7\n\4\n\d\5\p\o\o\3\y\g\f\6\x\m\7\b\o\i\d\q\k\6\m\7\w\q\i\8\y\r\t\o\b\u\a\1\v\3\h\2\u\m\q\u\r\d\u\8\t\q\u\g\v\b\l\o\8\y\v\m\p\9\m\s\s\p\m\1\8\o\n\9\o\1\d\e\5\z\r\e\a\p\9\s\u\y\9\m\4\k\f\p\h\f\z\o\g\8\j\4\d\3\z\h\r\n\n\i\j\t\w\w\y\6\9\a\n\r\0\4\i\f\6\i\9\j\2\y\v\w\m\r\7\o\i\y\e\s\h\o\j\t\4\e\2\t\c\6\t\w\o\v\n\t\d\k\e\z\0\0\a\8\m\1\i\z\e\f\f\4\d\3\k\4\5\o\k\c\b\0\s\t\f\r\z\7\n\q\r\2\8\3\c\e\9\q\b\p\0\c\q\y\e\k\5\9\b\e\e\r\n\c\c\4\i\u\1\3\b\9\b\0\1\m\i\e\1\q\e\u\4\e\x\6\t\0\h\s\n\d\i\u\f\y\y\g\z\6\w\2\g\i\q\l\y\1\k\5\9\n\6\2\f\j\x\p\y\c\g\p\p\o\6\s\h\9\6\i\e\e\k\8\6\z\q\t\k ]] 00:11:32.254 00:11:32.254 real 0m1.527s 00:11:32.254 user 0m0.818s 00:11:32.254 sys 0m0.377s 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:11:32.254 ************************************ 00:11:32.254 START TEST dd_flag_noatime_forced_aio 00:11:32.254 ************************************ 00:11:32.254 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733396099 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733396099 00:11:32.255 10:54:59 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:11:33.633 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:33.633 [2024-12-05 10:55:00.405719] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
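The noatime test above records the source file's access timestamp with stat --printf=%X, sleeps one second, copies the file with --iflag=noatime, and expects the timestamp not to move (a later copy without the flag checks that it does advance). The same check sketched with GNU dd, which also accepts iflag=noatime — path hypothetical; note that O_NOATIME requires the caller to own the file, and mount options such as relatime can mask the no-flag case:

    # A read opened with O_NOATIME must leave the access time untouched.
    atime_before=$(stat --printf=%X /tmp/dd.dump0)
    sleep 1
    dd if=/tmp/dd.dump0 iflag=noatime of=/dev/null status=none
    atime_after=$(stat --printf=%X /tmp/dd.dump0)
    (( atime_before == atime_after )) && echo atime-preserved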
00:11:33.633 [2024-12-05 10:55:00.405790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60540 ] 00:11:33.633 [2024-12-05 10:55:00.558190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.633 [2024-12-05 10:55:00.610307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.633 [2024-12-05 10:55:00.651617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.633  [2024-12-05T10:55:01.081Z] Copying: 512/512 [B] (average 500 kBps) 00:11:33.922 00:11:33.922 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:33.922 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733396099 )) 00:11:33.922 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:33.922 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733396099 )) 00:11:33.922 10:55:00 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:33.922 [2024-12-05 10:55:00.925574] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:33.922 [2024-12-05 10:55:00.925655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60557 ] 00:11:33.922 [2024-12-05 10:55:01.075614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.179 [2024-12-05 10:55:01.128102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.179 [2024-12-05 10:55:01.169166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.179  [2024-12-05T10:55:01.597Z] Copying: 512/512 [B] (average 500 kBps) 00:11:34.438 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733396101 )) 00:11:34.438 00:11:34.438 real 0m2.054s 00:11:34.438 user 0m0.533s 00:11:34.438 sys 0m0.282s 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:34.438 ************************************ 00:11:34.438 END TEST dd_flag_noatime_forced_aio 00:11:34.438 ************************************ 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.438 ************************************ 00:11:34.438 START TEST dd_flags_misc_forced_aio 00:11:34.438 ************************************ 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:34.438 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:34.438 [2024-12-05 10:55:01.518011] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:34.438 [2024-12-05 10:55:01.518078] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:11:34.695 [2024-12-05 10:55:01.667876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.695 [2024-12-05 10:55:01.720863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.695 [2024-12-05 10:55:01.761712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:34.695  [2024-12-05T10:55:02.113Z] Copying: 512/512 [B] (average 500 kBps) 00:11:34.954 00:11:34.954 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ slvz6981tsytgexznbe3rz45f7wwdacqbhphxbhe4prz6znvl8bmhucik5uj99isxumajajlwtqluusfwi0axyxlt5ai4zo90zyv71kjp12c30xle0kfy51055mda79a6wajhuz78t5y0bktmz3xd61chjhz4f2bhpq94nwpp9qvksvnvg7uat3d7s2yzpzityrlwggysvymgxyp0utm65zsrxct0303qq5zw98wcjt6yattrzdqwaxmmqi42xu9kul6o497ti6203xc5m8dp3edtfvmwlm5qmguwqlf2l8fzax6moh2ka4v2bblm4xu3k0prbqxue7syugx91k1q8cd8mj8guyk5iz88585b9fohxx8q2vyny1v7uzenwhake0dcv6ab0h7e3iyex6mysrg8xac4ykgvlv5mkxhwxyyi3m02i2t5kq2zp4sixrrun9qovtnqnxmqacj5gsxpaqupenxavedc0nq4h6ewedxp4u2fi1f1whjof5cgba8 == 
\s\l\v\z\6\9\8\1\t\s\y\t\g\e\x\z\n\b\e\3\r\z\4\5\f\7\w\w\d\a\c\q\b\h\p\h\x\b\h\e\4\p\r\z\6\z\n\v\l\8\b\m\h\u\c\i\k\5\u\j\9\9\i\s\x\u\m\a\j\a\j\l\w\t\q\l\u\u\s\f\w\i\0\a\x\y\x\l\t\5\a\i\4\z\o\9\0\z\y\v\7\1\k\j\p\1\2\c\3\0\x\l\e\0\k\f\y\5\1\0\5\5\m\d\a\7\9\a\6\w\a\j\h\u\z\7\8\t\5\y\0\b\k\t\m\z\3\x\d\6\1\c\h\j\h\z\4\f\2\b\h\p\q\9\4\n\w\p\p\9\q\v\k\s\v\n\v\g\7\u\a\t\3\d\7\s\2\y\z\p\z\i\t\y\r\l\w\g\g\y\s\v\y\m\g\x\y\p\0\u\t\m\6\5\z\s\r\x\c\t\0\3\0\3\q\q\5\z\w\9\8\w\c\j\t\6\y\a\t\t\r\z\d\q\w\a\x\m\m\q\i\4\2\x\u\9\k\u\l\6\o\4\9\7\t\i\6\2\0\3\x\c\5\m\8\d\p\3\e\d\t\f\v\m\w\l\m\5\q\m\g\u\w\q\l\f\2\l\8\f\z\a\x\6\m\o\h\2\k\a\4\v\2\b\b\l\m\4\x\u\3\k\0\p\r\b\q\x\u\e\7\s\y\u\g\x\9\1\k\1\q\8\c\d\8\m\j\8\g\u\y\k\5\i\z\8\8\5\8\5\b\9\f\o\h\x\x\8\q\2\v\y\n\y\1\v\7\u\z\e\n\w\h\a\k\e\0\d\c\v\6\a\b\0\h\7\e\3\i\y\e\x\6\m\y\s\r\g\8\x\a\c\4\y\k\g\v\l\v\5\m\k\x\h\w\x\y\y\i\3\m\0\2\i\2\t\5\k\q\2\z\p\4\s\i\x\r\r\u\n\9\q\o\v\t\n\q\n\x\m\q\a\c\j\5\g\s\x\p\a\q\u\p\e\n\x\a\v\e\d\c\0\n\q\4\h\6\e\w\e\d\x\p\4\u\2\f\i\1\f\1\w\h\j\o\f\5\c\g\b\a\8 ]] 00:11:34.954 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:34.954 10:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:34.954 [2024-12-05 10:55:02.023366] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:34.954 [2024-12-05 10:55:02.023442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:11:35.213 [2024-12-05 10:55:02.173844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.213 [2024-12-05 10:55:02.226316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.213 [2024-12-05 10:55:02.267529] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.213  [2024-12-05T10:55:02.632Z] Copying: 512/512 [B] (average 500 kBps) 00:11:35.473 00:11:35.473 10:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ slvz6981tsytgexznbe3rz45f7wwdacqbhphxbhe4prz6znvl8bmhucik5uj99isxumajajlwtqluusfwi0axyxlt5ai4zo90zyv71kjp12c30xle0kfy51055mda79a6wajhuz78t5y0bktmz3xd61chjhz4f2bhpq94nwpp9qvksvnvg7uat3d7s2yzpzityrlwggysvymgxyp0utm65zsrxct0303qq5zw98wcjt6yattrzdqwaxmmqi42xu9kul6o497ti6203xc5m8dp3edtfvmwlm5qmguwqlf2l8fzax6moh2ka4v2bblm4xu3k0prbqxue7syugx91k1q8cd8mj8guyk5iz88585b9fohxx8q2vyny1v7uzenwhake0dcv6ab0h7e3iyex6mysrg8xac4ykgvlv5mkxhwxyyi3m02i2t5kq2zp4sixrrun9qovtnqnxmqacj5gsxpaqupenxavedc0nq4h6ewedxp4u2fi1f1whjof5cgba8 == 
\s\l\v\z\6\9\8\1\t\s\y\t\g\e\x\z\n\b\e\3\r\z\4\5\f\7\w\w\d\a\c\q\b\h\p\h\x\b\h\e\4\p\r\z\6\z\n\v\l\8\b\m\h\u\c\i\k\5\u\j\9\9\i\s\x\u\m\a\j\a\j\l\w\t\q\l\u\u\s\f\w\i\0\a\x\y\x\l\t\5\a\i\4\z\o\9\0\z\y\v\7\1\k\j\p\1\2\c\3\0\x\l\e\0\k\f\y\5\1\0\5\5\m\d\a\7\9\a\6\w\a\j\h\u\z\7\8\t\5\y\0\b\k\t\m\z\3\x\d\6\1\c\h\j\h\z\4\f\2\b\h\p\q\9\4\n\w\p\p\9\q\v\k\s\v\n\v\g\7\u\a\t\3\d\7\s\2\y\z\p\z\i\t\y\r\l\w\g\g\y\s\v\y\m\g\x\y\p\0\u\t\m\6\5\z\s\r\x\c\t\0\3\0\3\q\q\5\z\w\9\8\w\c\j\t\6\y\a\t\t\r\z\d\q\w\a\x\m\m\q\i\4\2\x\u\9\k\u\l\6\o\4\9\7\t\i\6\2\0\3\x\c\5\m\8\d\p\3\e\d\t\f\v\m\w\l\m\5\q\m\g\u\w\q\l\f\2\l\8\f\z\a\x\6\m\o\h\2\k\a\4\v\2\b\b\l\m\4\x\u\3\k\0\p\r\b\q\x\u\e\7\s\y\u\g\x\9\1\k\1\q\8\c\d\8\m\j\8\g\u\y\k\5\i\z\8\8\5\8\5\b\9\f\o\h\x\x\8\q\2\v\y\n\y\1\v\7\u\z\e\n\w\h\a\k\e\0\d\c\v\6\a\b\0\h\7\e\3\i\y\e\x\6\m\y\s\r\g\8\x\a\c\4\y\k\g\v\l\v\5\m\k\x\h\w\x\y\y\i\3\m\0\2\i\2\t\5\k\q\2\z\p\4\s\i\x\r\r\u\n\9\q\o\v\t\n\q\n\x\m\q\a\c\j\5\g\s\x\p\a\q\u\p\e\n\x\a\v\e\d\c\0\n\q\4\h\6\e\w\e\d\x\p\4\u\2\f\i\1\f\1\w\h\j\o\f\5\c\g\b\a\8 ]] 00:11:35.473 10:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:35.473 10:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:35.473 [2024-12-05 10:55:02.527489] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:35.473 [2024-12-05 10:55:02.527562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60595 ] 00:11:35.732 [2024-12-05 10:55:02.683397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.732 [2024-12-05 10:55:02.738772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.732 [2024-12-05 10:55:02.780918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:35.732  [2024-12-05T10:55:03.150Z] Copying: 512/512 [B] (average 166 kBps) 00:11:35.991 00:11:35.991 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ slvz6981tsytgexznbe3rz45f7wwdacqbhphxbhe4prz6znvl8bmhucik5uj99isxumajajlwtqluusfwi0axyxlt5ai4zo90zyv71kjp12c30xle0kfy51055mda79a6wajhuz78t5y0bktmz3xd61chjhz4f2bhpq94nwpp9qvksvnvg7uat3d7s2yzpzityrlwggysvymgxyp0utm65zsrxct0303qq5zw98wcjt6yattrzdqwaxmmqi42xu9kul6o497ti6203xc5m8dp3edtfvmwlm5qmguwqlf2l8fzax6moh2ka4v2bblm4xu3k0prbqxue7syugx91k1q8cd8mj8guyk5iz88585b9fohxx8q2vyny1v7uzenwhake0dcv6ab0h7e3iyex6mysrg8xac4ykgvlv5mkxhwxyyi3m02i2t5kq2zp4sixrrun9qovtnqnxmqacj5gsxpaqupenxavedc0nq4h6ewedxp4u2fi1f1whjof5cgba8 == 
\s\l\v\z\6\9\8\1\t\s\y\t\g\e\x\z\n\b\e\3\r\z\4\5\f\7\w\w\d\a\c\q\b\h\p\h\x\b\h\e\4\p\r\z\6\z\n\v\l\8\b\m\h\u\c\i\k\5\u\j\9\9\i\s\x\u\m\a\j\a\j\l\w\t\q\l\u\u\s\f\w\i\0\a\x\y\x\l\t\5\a\i\4\z\o\9\0\z\y\v\7\1\k\j\p\1\2\c\3\0\x\l\e\0\k\f\y\5\1\0\5\5\m\d\a\7\9\a\6\w\a\j\h\u\z\7\8\t\5\y\0\b\k\t\m\z\3\x\d\6\1\c\h\j\h\z\4\f\2\b\h\p\q\9\4\n\w\p\p\9\q\v\k\s\v\n\v\g\7\u\a\t\3\d\7\s\2\y\z\p\z\i\t\y\r\l\w\g\g\y\s\v\y\m\g\x\y\p\0\u\t\m\6\5\z\s\r\x\c\t\0\3\0\3\q\q\5\z\w\9\8\w\c\j\t\6\y\a\t\t\r\z\d\q\w\a\x\m\m\q\i\4\2\x\u\9\k\u\l\6\o\4\9\7\t\i\6\2\0\3\x\c\5\m\8\d\p\3\e\d\t\f\v\m\w\l\m\5\q\m\g\u\w\q\l\f\2\l\8\f\z\a\x\6\m\o\h\2\k\a\4\v\2\b\b\l\m\4\x\u\3\k\0\p\r\b\q\x\u\e\7\s\y\u\g\x\9\1\k\1\q\8\c\d\8\m\j\8\g\u\y\k\5\i\z\8\8\5\8\5\b\9\f\o\h\x\x\8\q\2\v\y\n\y\1\v\7\u\z\e\n\w\h\a\k\e\0\d\c\v\6\a\b\0\h\7\e\3\i\y\e\x\6\m\y\s\r\g\8\x\a\c\4\y\k\g\v\l\v\5\m\k\x\h\w\x\y\y\i\3\m\0\2\i\2\t\5\k\q\2\z\p\4\s\i\x\r\r\u\n\9\q\o\v\t\n\q\n\x\m\q\a\c\j\5\g\s\x\p\a\q\u\p\e\n\x\a\v\e\d\c\0\n\q\4\h\6\e\w\e\d\x\p\4\u\2\f\i\1\f\1\w\h\j\o\f\5\c\g\b\a\8 ]] 00:11:35.991 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:35.991 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:35.991 [2024-12-05 10:55:03.055345] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:35.991 [2024-12-05 10:55:03.055444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60608 ] 00:11:36.251 [2024-12-05 10:55:03.213154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.251 [2024-12-05 10:55:03.269791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.251 [2024-12-05 10:55:03.310830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.251  [2024-12-05T10:55:03.669Z] Copying: 512/512 [B] (average 500 kBps) 00:11:36.510 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ slvz6981tsytgexznbe3rz45f7wwdacqbhphxbhe4prz6znvl8bmhucik5uj99isxumajajlwtqluusfwi0axyxlt5ai4zo90zyv71kjp12c30xle0kfy51055mda79a6wajhuz78t5y0bktmz3xd61chjhz4f2bhpq94nwpp9qvksvnvg7uat3d7s2yzpzityrlwggysvymgxyp0utm65zsrxct0303qq5zw98wcjt6yattrzdqwaxmmqi42xu9kul6o497ti6203xc5m8dp3edtfvmwlm5qmguwqlf2l8fzax6moh2ka4v2bblm4xu3k0prbqxue7syugx91k1q8cd8mj8guyk5iz88585b9fohxx8q2vyny1v7uzenwhake0dcv6ab0h7e3iyex6mysrg8xac4ykgvlv5mkxhwxyyi3m02i2t5kq2zp4sixrrun9qovtnqnxmqacj5gsxpaqupenxavedc0nq4h6ewedxp4u2fi1f1whjof5cgba8 == 
\s\l\v\z\6\9\8\1\t\s\y\t\g\e\x\z\n\b\e\3\r\z\4\5\f\7\w\w\d\a\c\q\b\h\p\h\x\b\h\e\4\p\r\z\6\z\n\v\l\8\b\m\h\u\c\i\k\5\u\j\9\9\i\s\x\u\m\a\j\a\j\l\w\t\q\l\u\u\s\f\w\i\0\a\x\y\x\l\t\5\a\i\4\z\o\9\0\z\y\v\7\1\k\j\p\1\2\c\3\0\x\l\e\0\k\f\y\5\1\0\5\5\m\d\a\7\9\a\6\w\a\j\h\u\z\7\8\t\5\y\0\b\k\t\m\z\3\x\d\6\1\c\h\j\h\z\4\f\2\b\h\p\q\9\4\n\w\p\p\9\q\v\k\s\v\n\v\g\7\u\a\t\3\d\7\s\2\y\z\p\z\i\t\y\r\l\w\g\g\y\s\v\y\m\g\x\y\p\0\u\t\m\6\5\z\s\r\x\c\t\0\3\0\3\q\q\5\z\w\9\8\w\c\j\t\6\y\a\t\t\r\z\d\q\w\a\x\m\m\q\i\4\2\x\u\9\k\u\l\6\o\4\9\7\t\i\6\2\0\3\x\c\5\m\8\d\p\3\e\d\t\f\v\m\w\l\m\5\q\m\g\u\w\q\l\f\2\l\8\f\z\a\x\6\m\o\h\2\k\a\4\v\2\b\b\l\m\4\x\u\3\k\0\p\r\b\q\x\u\e\7\s\y\u\g\x\9\1\k\1\q\8\c\d\8\m\j\8\g\u\y\k\5\i\z\8\8\5\8\5\b\9\f\o\h\x\x\8\q\2\v\y\n\y\1\v\7\u\z\e\n\w\h\a\k\e\0\d\c\v\6\a\b\0\h\7\e\3\i\y\e\x\6\m\y\s\r\g\8\x\a\c\4\y\k\g\v\l\v\5\m\k\x\h\w\x\y\y\i\3\m\0\2\i\2\t\5\k\q\2\z\p\4\s\i\x\r\r\u\n\9\q\o\v\t\n\q\n\x\m\q\a\c\j\5\g\s\x\p\a\q\u\p\e\n\x\a\v\e\d\c\0\n\q\4\h\6\e\w\e\d\x\p\4\u\2\f\i\1\f\1\w\h\j\o\f\5\c\g\b\a\8 ]] 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:36.510 10:55:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:11:36.510 [2024-12-05 10:55:03.583829] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
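A note on the backslash-heavy comparison lines in this test: inside bash's [[ ]], the right-hand side of == is treated as a glob pattern, so xtrace prints it with every character escaped. The escaped string is identical to the plain one on the left; the check only verifies that the 512 generated bytes survived the spdk_dd round trip. A minimal reproduction in plain bash (not from this run):

    $ set -x
    $ [[ abc == abc ]]
    + [[ abc == \a\b\c ]]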
00:11:36.510 [2024-12-05 10:55:03.583899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:11:36.769 [2024-12-05 10:55:03.738396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.769 [2024-12-05 10:55:03.792659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.769 [2024-12-05 10:55:03.834770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:36.769  [2024-12-05T10:55:04.186Z] Copying: 512/512 [B] (average 500 kBps) 00:11:37.027 00:11:37.027 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mvedsf9xtskh8e180lbbp9zhub4zkaegwx6bhm7l9faa2lypsb2x2zkbnnkq31qxvzqyrcqmn2qvg692690gzamw1whdjhw275cuqifolgqnfbp8wzsfl4l10bga8qp8ih97inwkch4j66quu3bcw863eutipwq2t2dh2t6id5p51eb313qspbqqf64kvdib2hz7xzj8edvvrkazlt2vynff92yrfz6zb5toq7om59r2knkg2grqj9ghi04fb2cq8a088pc0t2v32hwrj1j49uplq0ijgl3ll65a65pe1jvfomgk1vw1cjh52q1hevdcr3k9mrsr0866jx0mcnffkw7nlbqpf040vcydieq64bd553wa0qzmph0h7beuk3gxb5uxumf9nl416b20s96854nfbdub0k4685hctd5bvsoqw32mrffgotzt8chj68uxsklazsx7vrpeh9ideb0rmev3wnbvj70zj9l93nbj8f1tayvn3e208tx23rolnld9 == \m\v\e\d\s\f\9\x\t\s\k\h\8\e\1\8\0\l\b\b\p\9\z\h\u\b\4\z\k\a\e\g\w\x\6\b\h\m\7\l\9\f\a\a\2\l\y\p\s\b\2\x\2\z\k\b\n\n\k\q\3\1\q\x\v\z\q\y\r\c\q\m\n\2\q\v\g\6\9\2\6\9\0\g\z\a\m\w\1\w\h\d\j\h\w\2\7\5\c\u\q\i\f\o\l\g\q\n\f\b\p\8\w\z\s\f\l\4\l\1\0\b\g\a\8\q\p\8\i\h\9\7\i\n\w\k\c\h\4\j\6\6\q\u\u\3\b\c\w\8\6\3\e\u\t\i\p\w\q\2\t\2\d\h\2\t\6\i\d\5\p\5\1\e\b\3\1\3\q\s\p\b\q\q\f\6\4\k\v\d\i\b\2\h\z\7\x\z\j\8\e\d\v\v\r\k\a\z\l\t\2\v\y\n\f\f\9\2\y\r\f\z\6\z\b\5\t\o\q\7\o\m\5\9\r\2\k\n\k\g\2\g\r\q\j\9\g\h\i\0\4\f\b\2\c\q\8\a\0\8\8\p\c\0\t\2\v\3\2\h\w\r\j\1\j\4\9\u\p\l\q\0\i\j\g\l\3\l\l\6\5\a\6\5\p\e\1\j\v\f\o\m\g\k\1\v\w\1\c\j\h\5\2\q\1\h\e\v\d\c\r\3\k\9\m\r\s\r\0\8\6\6\j\x\0\m\c\n\f\f\k\w\7\n\l\b\q\p\f\0\4\0\v\c\y\d\i\e\q\6\4\b\d\5\5\3\w\a\0\q\z\m\p\h\0\h\7\b\e\u\k\3\g\x\b\5\u\x\u\m\f\9\n\l\4\1\6\b\2\0\s\9\6\8\5\4\n\f\b\d\u\b\0\k\4\6\8\5\h\c\t\d\5\b\v\s\o\q\w\3\2\m\r\f\f\g\o\t\z\t\8\c\h\j\6\8\u\x\s\k\l\a\z\s\x\7\v\r\p\e\h\9\i\d\e\b\0\r\m\e\v\3\w\n\b\v\j\7\0\z\j\9\l\9\3\n\b\j\8\f\1\t\a\y\v\n\3\e\2\0\8\t\x\2\3\r\o\l\n\l\d\9 ]] 00:11:37.027 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:37.027 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:11:37.027 [2024-12-05 10:55:04.093115] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:37.027 [2024-12-05 10:55:04.093205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:11:37.286 [2024-12-05 10:55:04.249018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.286 [2024-12-05 10:55:04.302841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.286 [2024-12-05 10:55:04.344947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.286  [2024-12-05T10:55:04.704Z] Copying: 512/512 [B] (average 500 kBps) 00:11:37.545 00:11:37.545 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mvedsf9xtskh8e180lbbp9zhub4zkaegwx6bhm7l9faa2lypsb2x2zkbnnkq31qxvzqyrcqmn2qvg692690gzamw1whdjhw275cuqifolgqnfbp8wzsfl4l10bga8qp8ih97inwkch4j66quu3bcw863eutipwq2t2dh2t6id5p51eb313qspbqqf64kvdib2hz7xzj8edvvrkazlt2vynff92yrfz6zb5toq7om59r2knkg2grqj9ghi04fb2cq8a088pc0t2v32hwrj1j49uplq0ijgl3ll65a65pe1jvfomgk1vw1cjh52q1hevdcr3k9mrsr0866jx0mcnffkw7nlbqpf040vcydieq64bd553wa0qzmph0h7beuk3gxb5uxumf9nl416b20s96854nfbdub0k4685hctd5bvsoqw32mrffgotzt8chj68uxsklazsx7vrpeh9ideb0rmev3wnbvj70zj9l93nbj8f1tayvn3e208tx23rolnld9 == \m\v\e\d\s\f\9\x\t\s\k\h\8\e\1\8\0\l\b\b\p\9\z\h\u\b\4\z\k\a\e\g\w\x\6\b\h\m\7\l\9\f\a\a\2\l\y\p\s\b\2\x\2\z\k\b\n\n\k\q\3\1\q\x\v\z\q\y\r\c\q\m\n\2\q\v\g\6\9\2\6\9\0\g\z\a\m\w\1\w\h\d\j\h\w\2\7\5\c\u\q\i\f\o\l\g\q\n\f\b\p\8\w\z\s\f\l\4\l\1\0\b\g\a\8\q\p\8\i\h\9\7\i\n\w\k\c\h\4\j\6\6\q\u\u\3\b\c\w\8\6\3\e\u\t\i\p\w\q\2\t\2\d\h\2\t\6\i\d\5\p\5\1\e\b\3\1\3\q\s\p\b\q\q\f\6\4\k\v\d\i\b\2\h\z\7\x\z\j\8\e\d\v\v\r\k\a\z\l\t\2\v\y\n\f\f\9\2\y\r\f\z\6\z\b\5\t\o\q\7\o\m\5\9\r\2\k\n\k\g\2\g\r\q\j\9\g\h\i\0\4\f\b\2\c\q\8\a\0\8\8\p\c\0\t\2\v\3\2\h\w\r\j\1\j\4\9\u\p\l\q\0\i\j\g\l\3\l\l\6\5\a\6\5\p\e\1\j\v\f\o\m\g\k\1\v\w\1\c\j\h\5\2\q\1\h\e\v\d\c\r\3\k\9\m\r\s\r\0\8\6\6\j\x\0\m\c\n\f\f\k\w\7\n\l\b\q\p\f\0\4\0\v\c\y\d\i\e\q\6\4\b\d\5\5\3\w\a\0\q\z\m\p\h\0\h\7\b\e\u\k\3\g\x\b\5\u\x\u\m\f\9\n\l\4\1\6\b\2\0\s\9\6\8\5\4\n\f\b\d\u\b\0\k\4\6\8\5\h\c\t\d\5\b\v\s\o\q\w\3\2\m\r\f\f\g\o\t\z\t\8\c\h\j\6\8\u\x\s\k\l\a\z\s\x\7\v\r\p\e\h\9\i\d\e\b\0\r\m\e\v\3\w\n\b\v\j\7\0\z\j\9\l\9\3\n\b\j\8\f\1\t\a\y\v\n\3\e\2\0\8\t\x\2\3\r\o\l\n\l\d\9 ]] 00:11:37.545 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:37.545 10:55:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:11:37.545 [2024-12-05 10:55:04.608671] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:37.545 [2024-12-05 10:55:04.608742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60625 ] 00:11:37.805 [2024-12-05 10:55:04.757908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.805 [2024-12-05 10:55:04.809759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.805 [2024-12-05 10:55:04.850699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:37.805  [2024-12-05T10:55:05.312Z] Copying: 512/512 [B] (average 500 kBps) 00:11:38.153 00:11:38.153 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mvedsf9xtskh8e180lbbp9zhub4zkaegwx6bhm7l9faa2lypsb2x2zkbnnkq31qxvzqyrcqmn2qvg692690gzamw1whdjhw275cuqifolgqnfbp8wzsfl4l10bga8qp8ih97inwkch4j66quu3bcw863eutipwq2t2dh2t6id5p51eb313qspbqqf64kvdib2hz7xzj8edvvrkazlt2vynff92yrfz6zb5toq7om59r2knkg2grqj9ghi04fb2cq8a088pc0t2v32hwrj1j49uplq0ijgl3ll65a65pe1jvfomgk1vw1cjh52q1hevdcr3k9mrsr0866jx0mcnffkw7nlbqpf040vcydieq64bd553wa0qzmph0h7beuk3gxb5uxumf9nl416b20s96854nfbdub0k4685hctd5bvsoqw32mrffgotzt8chj68uxsklazsx7vrpeh9ideb0rmev3wnbvj70zj9l93nbj8f1tayvn3e208tx23rolnld9 == \m\v\e\d\s\f\9\x\t\s\k\h\8\e\1\8\0\l\b\b\p\9\z\h\u\b\4\z\k\a\e\g\w\x\6\b\h\m\7\l\9\f\a\a\2\l\y\p\s\b\2\x\2\z\k\b\n\n\k\q\3\1\q\x\v\z\q\y\r\c\q\m\n\2\q\v\g\6\9\2\6\9\0\g\z\a\m\w\1\w\h\d\j\h\w\2\7\5\c\u\q\i\f\o\l\g\q\n\f\b\p\8\w\z\s\f\l\4\l\1\0\b\g\a\8\q\p\8\i\h\9\7\i\n\w\k\c\h\4\j\6\6\q\u\u\3\b\c\w\8\6\3\e\u\t\i\p\w\q\2\t\2\d\h\2\t\6\i\d\5\p\5\1\e\b\3\1\3\q\s\p\b\q\q\f\6\4\k\v\d\i\b\2\h\z\7\x\z\j\8\e\d\v\v\r\k\a\z\l\t\2\v\y\n\f\f\9\2\y\r\f\z\6\z\b\5\t\o\q\7\o\m\5\9\r\2\k\n\k\g\2\g\r\q\j\9\g\h\i\0\4\f\b\2\c\q\8\a\0\8\8\p\c\0\t\2\v\3\2\h\w\r\j\1\j\4\9\u\p\l\q\0\i\j\g\l\3\l\l\6\5\a\6\5\p\e\1\j\v\f\o\m\g\k\1\v\w\1\c\j\h\5\2\q\1\h\e\v\d\c\r\3\k\9\m\r\s\r\0\8\6\6\j\x\0\m\c\n\f\f\k\w\7\n\l\b\q\p\f\0\4\0\v\c\y\d\i\e\q\6\4\b\d\5\5\3\w\a\0\q\z\m\p\h\0\h\7\b\e\u\k\3\g\x\b\5\u\x\u\m\f\9\n\l\4\1\6\b\2\0\s\9\6\8\5\4\n\f\b\d\u\b\0\k\4\6\8\5\h\c\t\d\5\b\v\s\o\q\w\3\2\m\r\f\f\g\o\t\z\t\8\c\h\j\6\8\u\x\s\k\l\a\z\s\x\7\v\r\p\e\h\9\i\d\e\b\0\r\m\e\v\3\w\n\b\v\j\7\0\z\j\9\l\9\3\n\b\j\8\f\1\t\a\y\v\n\3\e\2\0\8\t\x\2\3\r\o\l\n\l\d\9 ]] 00:11:38.153 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:11:38.153 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:11:38.153 [2024-12-05 10:55:05.106498] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:38.153 [2024-12-05 10:55:05.106576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60638 ] 00:11:38.153 [2024-12-05 10:55:05.255455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.411 [2024-12-05 10:55:05.308601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.411 [2024-12-05 10:55:05.349688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:38.411  [2024-12-05T10:55:05.570Z] Copying: 512/512 [B] (average 500 kBps) 00:11:38.411 00:11:38.411 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ mvedsf9xtskh8e180lbbp9zhub4zkaegwx6bhm7l9faa2lypsb2x2zkbnnkq31qxvzqyrcqmn2qvg692690gzamw1whdjhw275cuqifolgqnfbp8wzsfl4l10bga8qp8ih97inwkch4j66quu3bcw863eutipwq2t2dh2t6id5p51eb313qspbqqf64kvdib2hz7xzj8edvvrkazlt2vynff92yrfz6zb5toq7om59r2knkg2grqj9ghi04fb2cq8a088pc0t2v32hwrj1j49uplq0ijgl3ll65a65pe1jvfomgk1vw1cjh52q1hevdcr3k9mrsr0866jx0mcnffkw7nlbqpf040vcydieq64bd553wa0qzmph0h7beuk3gxb5uxumf9nl416b20s96854nfbdub0k4685hctd5bvsoqw32mrffgotzt8chj68uxsklazsx7vrpeh9ideb0rmev3wnbvj70zj9l93nbj8f1tayvn3e208tx23rolnld9 == \m\v\e\d\s\f\9\x\t\s\k\h\8\e\1\8\0\l\b\b\p\9\z\h\u\b\4\z\k\a\e\g\w\x\6\b\h\m\7\l\9\f\a\a\2\l\y\p\s\b\2\x\2\z\k\b\n\n\k\q\3\1\q\x\v\z\q\y\r\c\q\m\n\2\q\v\g\6\9\2\6\9\0\g\z\a\m\w\1\w\h\d\j\h\w\2\7\5\c\u\q\i\f\o\l\g\q\n\f\b\p\8\w\z\s\f\l\4\l\1\0\b\g\a\8\q\p\8\i\h\9\7\i\n\w\k\c\h\4\j\6\6\q\u\u\3\b\c\w\8\6\3\e\u\t\i\p\w\q\2\t\2\d\h\2\t\6\i\d\5\p\5\1\e\b\3\1\3\q\s\p\b\q\q\f\6\4\k\v\d\i\b\2\h\z\7\x\z\j\8\e\d\v\v\r\k\a\z\l\t\2\v\y\n\f\f\9\2\y\r\f\z\6\z\b\5\t\o\q\7\o\m\5\9\r\2\k\n\k\g\2\g\r\q\j\9\g\h\i\0\4\f\b\2\c\q\8\a\0\8\8\p\c\0\t\2\v\3\2\h\w\r\j\1\j\4\9\u\p\l\q\0\i\j\g\l\3\l\l\6\5\a\6\5\p\e\1\j\v\f\o\m\g\k\1\v\w\1\c\j\h\5\2\q\1\h\e\v\d\c\r\3\k\9\m\r\s\r\0\8\6\6\j\x\0\m\c\n\f\f\k\w\7\n\l\b\q\p\f\0\4\0\v\c\y\d\i\e\q\6\4\b\d\5\5\3\w\a\0\q\z\m\p\h\0\h\7\b\e\u\k\3\g\x\b\5\u\x\u\m\f\9\n\l\4\1\6\b\2\0\s\9\6\8\5\4\n\f\b\d\u\b\0\k\4\6\8\5\h\c\t\d\5\b\v\s\o\q\w\3\2\m\r\f\f\g\o\t\z\t\8\c\h\j\6\8\u\x\s\k\l\a\z\s\x\7\v\r\p\e\h\9\i\d\e\b\0\r\m\e\v\3\w\n\b\v\j\7\0\z\j\9\l\9\3\n\b\j\8\f\1\t\a\y\v\n\3\e\2\0\8\t\x\2\3\r\o\l\n\l\d\9 ]] 00:11:38.411 00:11:38.411 real 0m4.106s 00:11:38.411 user 0m2.141s 00:11:38.411 sys 0m0.984s 00:11:38.411 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.411 ************************************ 00:11:38.411 END TEST dd_flags_misc_forced_aio 00:11:38.411 ************************************ 00:11:38.411 10:55:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:11:38.670 10:55:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:11:38.670 10:55:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:11:38.670 10:55:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:11:38.670 00:11:38.670 real 0m19.244s 00:11:38.670 user 0m8.927s 00:11:38.670 sys 0m6.044s 00:11:38.670 10:55:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.670 10:55:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
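For reference, the dd_flags_misc_forced_aio pass that just printed its totals is driven by a small flag matrix in dd/posix.sh. The following is a rough reconstruction from the xtrace above; the spdk_dd path and flag arrays are taken verbatim from the log, while the loop bodies and the redirect on gen_bytes are a sketch (xtrace hides redirections):

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    flags_ro=(direct nonblock)                 # read-side flags
    flags_rw=("${flags_ro[@]}" sync dsync)     # write-side flags

    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512 > dd.dump0               # fresh 512 random bytes per read flag (redirect assumed)
        for flag_rw in "${flags_rw[@]}"; do
            "$spdk_dd" --aio --if=dd.dump0 --iflag="$flag_ro" \
                       --of=dd.dump1 --oflag="$flag_rw"
            # the unquoted RHS is why xtrace prints it backslash-escaped (see note above)
            [[ $(< dd.dump0) == $(< dd.dump1) ]]
        done
    done

Two read flags times four write flags account for the eight 512-byte copies logged above.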
00:11:38.670 ************************************ 00:11:38.670 END TEST spdk_dd_posix 00:11:38.670 ************************************ 00:11:38.670 10:55:05 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:38.670 10:55:05 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.670 10:55:05 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.670 10:55:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:38.670 ************************************ 00:11:38.670 START TEST spdk_dd_malloc 00:11:38.670 ************************************ 00:11:38.670 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:11:38.670 * Looking for test storage... 00:11:38.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:38.670 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.929 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.930 --rc genhtml_branch_coverage=1 00:11:38.930 --rc genhtml_function_coverage=1 00:11:38.930 --rc genhtml_legend=1 00:11:38.930 --rc geninfo_all_blocks=1 00:11:38.930 --rc geninfo_unexecuted_blocks=1 00:11:38.930 00:11:38.930 ' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.930 --rc genhtml_branch_coverage=1 00:11:38.930 --rc genhtml_function_coverage=1 00:11:38.930 --rc genhtml_legend=1 00:11:38.930 --rc geninfo_all_blocks=1 00:11:38.930 --rc geninfo_unexecuted_blocks=1 00:11:38.930 00:11:38.930 ' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.930 --rc genhtml_branch_coverage=1 00:11:38.930 --rc genhtml_function_coverage=1 00:11:38.930 --rc genhtml_legend=1 00:11:38.930 --rc geninfo_all_blocks=1 00:11:38.930 --rc geninfo_unexecuted_blocks=1 00:11:38.930 00:11:38.930 ' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.930 --rc genhtml_branch_coverage=1 00:11:38.930 --rc genhtml_function_coverage=1 00:11:38.930 --rc genhtml_legend=1 00:11:38.930 --rc geninfo_all_blocks=1 00:11:38.930 --rc geninfo_unexecuted_blocks=1 00:11:38.930 00:11:38.930 ' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.930 10:55:05 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:38.930 ************************************ 00:11:38.930 START TEST dd_malloc_copy 00:11:38.930 ************************************ 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
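The method_bdev_malloc_create_0/_1 arrays being declared here each describe a RAM-backed malloc bdev of 1048576 blocks of 512 bytes, i.e. 512 MiB apiece; gen_conf renders them into the bdev_malloc_create entries of the JSON config printed below, which spdk_dd reads via --json /dev/fd/62. In sketch form (values exactly as in the log):

    declare -A method_bdev_malloc_create_0=([name]=malloc0 [num_blocks]=1048576 [block_size]=512)
    declare -A method_bdev_malloc_create_1=([name]=malloc1 [num_blocks]=1048576 [block_size]=512)
    # 1048576 blocks * 512 B = 512 MiB per bdev

At the roughly 252 MBps average reported below, each 512 MiB pass takes about 2 s; the 6.1 s real time for the whole test covers both copy directions (malloc0 to malloc1, then back) plus two app start-ups.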
00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:38.930 10:55:05 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:38.930 [2024-12-05 10:55:06.001253] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:38.930 [2024-12-05 10:55:06.001356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:11:38.930 { 00:11:38.930 "subsystems": [ 00:11:38.930 { 00:11:38.930 "subsystem": "bdev", 00:11:38.930 "config": [ 00:11:38.930 { 00:11:38.930 "params": { 00:11:38.930 "block_size": 512, 00:11:38.930 "num_blocks": 1048576, 00:11:38.930 "name": "malloc0" 00:11:38.930 }, 00:11:38.930 "method": "bdev_malloc_create" 00:11:38.930 }, 00:11:38.930 { 00:11:38.930 "params": { 00:11:38.930 "block_size": 512, 00:11:38.930 "num_blocks": 1048576, 00:11:38.930 "name": "malloc1" 00:11:38.930 }, 00:11:38.930 "method": "bdev_malloc_create" 00:11:38.930 }, 00:11:38.930 { 00:11:38.930 "method": "bdev_wait_for_examine" 00:11:38.930 } 00:11:38.930 ] 00:11:38.930 } 00:11:38.930 ] 00:11:38.930 } 00:11:39.189 [2024-12-05 10:55:06.150242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.189 [2024-12-05 10:55:06.201591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.189 [2024-12-05 10:55:06.243085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:40.565  [2024-12-05T10:55:08.662Z] Copying: 254/512 [MB] (254 MBps) [2024-12-05T10:55:08.662Z] Copying: 504/512 [MB] (250 MBps) [2024-12-05T10:55:09.230Z] Copying: 512/512 [MB] (average 252 MBps) 00:11:42.071 00:11:42.071 10:55:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:11:42.071 10:55:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:11:42.071 10:55:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:42.071 10:55:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:42.071 [2024-12-05 10:55:09.056104] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:42.071 [2024-12-05 10:55:09.056189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ] 00:11:42.071 { 00:11:42.071 "subsystems": [ 00:11:42.071 { 00:11:42.071 "subsystem": "bdev", 00:11:42.071 "config": [ 00:11:42.071 { 00:11:42.071 "params": { 00:11:42.071 "block_size": 512, 00:11:42.071 "num_blocks": 1048576, 00:11:42.071 "name": "malloc0" 00:11:42.071 }, 00:11:42.071 "method": "bdev_malloc_create" 00:11:42.071 }, 00:11:42.071 { 00:11:42.071 "params": { 00:11:42.071 "block_size": 512, 00:11:42.071 "num_blocks": 1048576, 00:11:42.071 "name": "malloc1" 00:11:42.071 }, 00:11:42.071 "method": "bdev_malloc_create" 00:11:42.071 }, 00:11:42.071 { 00:11:42.071 "method": "bdev_wait_for_examine" 00:11:42.071 } 00:11:42.071 ] 00:11:42.071 } 00:11:42.071 ] 00:11:42.071 } 00:11:42.071 [2024-12-05 10:55:09.208540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.330 [2024-12-05 10:55:09.255974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.330 [2024-12-05 10:55:09.298214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:43.706  [2024-12-05T10:55:11.845Z] Copying: 254/512 [MB] (254 MBps) [2024-12-05T10:55:11.845Z] Copying: 507/512 [MB] (253 MBps) [2024-12-05T10:55:12.127Z] Copying: 512/512 [MB] (average 254 MBps) 00:11:44.968 00:11:44.968 00:11:44.968 real 0m6.103s 00:11:44.968 user 0m5.276s 00:11:44.968 sys 0m0.684s 00:11:44.968 10:55:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.968 10:55:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 ************************************ 00:11:44.968 END TEST dd_malloc_copy 00:11:44.968 ************************************ 00:11:44.968 00:11:44.968 real 0m6.415s 00:11:44.968 user 0m5.429s 00:11:44.968 sys 0m0.853s 00:11:44.968 10:55:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.968 10:55:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:11:44.968 ************************************ 00:11:44.968 END TEST spdk_dd_malloc 00:11:44.968 ************************************ 00:11:45.228 10:55:12 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:45.228 10:55:12 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.228 10:55:12 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.228 10:55:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:45.229 ************************************ 00:11:45.229 START TEST spdk_dd_bdev_to_bdev 00:11:45.229 ************************************ 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:11:45.229 * Looking for test storage... 
00:11:45.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.229 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.487 --rc genhtml_branch_coverage=1 00:11:45.487 --rc genhtml_function_coverage=1 00:11:45.487 --rc genhtml_legend=1 00:11:45.487 --rc geninfo_all_blocks=1 00:11:45.487 --rc geninfo_unexecuted_blocks=1 00:11:45.487 00:11:45.487 ' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.487 --rc genhtml_branch_coverage=1 00:11:45.487 --rc genhtml_function_coverage=1 00:11:45.487 --rc genhtml_legend=1 00:11:45.487 --rc geninfo_all_blocks=1 00:11:45.487 --rc geninfo_unexecuted_blocks=1 00:11:45.487 00:11:45.487 ' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.487 --rc genhtml_branch_coverage=1 00:11:45.487 --rc genhtml_function_coverage=1 00:11:45.487 --rc genhtml_legend=1 00:11:45.487 --rc geninfo_all_blocks=1 00:11:45.487 --rc geninfo_unexecuted_blocks=1 00:11:45.487 00:11:45.487 ' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.487 --rc genhtml_branch_coverage=1 00:11:45.487 --rc genhtml_function_coverage=1 00:11:45.487 --rc genhtml_legend=1 00:11:45.487 --rc geninfo_all_blocks=1 00:11:45.487 --rc geninfo_unexecuted_blocks=1 00:11:45.487 00:11:45.487 ' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.487 10:55:12 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:45.487 ************************************ 00:11:45.487 START TEST dd_inflate_file 00:11:45.487 ************************************ 00:11:45.487 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:11:45.487 [2024-12-05 10:55:12.463744] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
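The magic line echoed above, 'This Is Our Magic, find it', is 26 characters plus a newline, presumably redirected into dd.dump0 (xtrace does not show redirections). The dd_inflate_file run just launched then appends 64 x 1048576 zero bytes to that file, so the wc -c check further down should report 67,108,864 + 27 = 67,108,891 bytes. Quick sanity check in plain shell:

    $ printf 'This Is Our Magic, find it\n' | wc -c
    27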
00:11:45.487 [2024-12-05 10:55:12.463821] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60869 ] 00:11:45.487 [2024-12-05 10:55:12.615847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.750 [2024-12-05 10:55:12.669210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.750 [2024-12-05 10:55:12.710498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.750  [2024-12-05T10:55:13.168Z] Copying: 64/64 [MB] (average 1600 MBps) 00:11:46.009 00:11:46.009 00:11:46.009 real 0m0.521s 00:11:46.009 user 0m0.298s 00:11:46.009 sys 0m0.269s 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 ************************************ 00:11:46.009 END TEST dd_inflate_file 00:11:46.009 ************************************ 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 10:55:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:46.009 ************************************ 00:11:46.009 START TEST dd_copy_to_out_bdev 00:11:46.009 ************************************ 00:11:46.009 10:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:11:46.009 { 00:11:46.009 "subsystems": [ 00:11:46.009 { 00:11:46.009 "subsystem": "bdev", 00:11:46.009 "config": [ 00:11:46.009 { 00:11:46.009 "params": { 00:11:46.009 "trtype": "pcie", 00:11:46.009 "traddr": "0000:00:10.0", 00:11:46.009 "name": "Nvme0" 00:11:46.009 }, 00:11:46.009 "method": "bdev_nvme_attach_controller" 00:11:46.009 }, 00:11:46.009 { 00:11:46.009 "params": { 00:11:46.009 "trtype": "pcie", 00:11:46.009 "traddr": "0000:00:11.0", 00:11:46.009 "name": "Nvme1" 00:11:46.009 }, 00:11:46.009 "method": "bdev_nvme_attach_controller" 00:11:46.009 }, 00:11:46.009 { 00:11:46.009 "method": "bdev_wait_for_examine" 00:11:46.009 } 00:11:46.009 ] 00:11:46.009 } 00:11:46.009 ] 00:11:46.009 } 00:11:46.009 [2024-12-05 10:55:13.062087] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
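dd_copy_to_out_bdev, started above, copies the 64 MiB dump file into the Nvme0n1 bdev; the JSON config attaches both NVMe controllers by PCI address (0000:00:10.0 and 0000:00:11.0). How that config reaches --json /dev/fd/62 is not visible in the xtrace; a plausible wiring, stated here as an assumption rather than a fact from this log, is:

    # assumed: gen_conf's JSON is fed to spdk_dd on fd 62
    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    "$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 62< <(gen_conf)

The summary that follows shows the transfer averaging 71 MBps, well under the malloc-to-malloc rate, since this path goes through the NVMe driver rather than RAM-backed bdevs.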
00:11:46.009 [2024-12-05 10:55:13.062168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60903 ] 00:11:46.267 [2024-12-05 10:55:13.212821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.267 [2024-12-05 10:55:13.266694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.267 [2024-12-05 10:55:13.308580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:47.209  [2024-12-05T10:55:14.627Z] Copying: 64/64 [MB] (average 71 MBps) 00:11:47.468 00:11:47.468 00:11:47.468 real 0m1.565s 00:11:47.468 user 0m1.355s 00:11:47.468 sys 0m1.207s 00:11:47.468 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.468 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:47.468 ************************************ 00:11:47.468 END TEST dd_copy_to_out_bdev 00:11:47.468 ************************************ 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:47.726 ************************************ 00:11:47.726 START TEST dd_offset_magic 00:11:47.726 ************************************ 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:11:47.726 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:47.727 10:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:47.727 [2024-12-05 10:55:14.707800] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:47.727 [2024-12-05 10:55:14.707876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:11:47.727 { 00:11:47.727 "subsystems": [ 00:11:47.727 { 00:11:47.727 "subsystem": "bdev", 00:11:47.727 "config": [ 00:11:47.727 { 00:11:47.727 "params": { 00:11:47.727 "trtype": "pcie", 00:11:47.727 "traddr": "0000:00:10.0", 00:11:47.727 "name": "Nvme0" 00:11:47.727 }, 00:11:47.727 "method": "bdev_nvme_attach_controller" 00:11:47.727 }, 00:11:47.727 { 00:11:47.727 "params": { 00:11:47.727 "trtype": "pcie", 00:11:47.727 "traddr": "0000:00:11.0", 00:11:47.727 "name": "Nvme1" 00:11:47.727 }, 00:11:47.727 "method": "bdev_nvme_attach_controller" 00:11:47.727 }, 00:11:47.727 { 00:11:47.727 "method": "bdev_wait_for_examine" 00:11:47.727 } 00:11:47.727 ] 00:11:47.727 } 00:11:47.727 ] 00:11:47.727 } 00:11:47.727 [2024-12-05 10:55:14.860576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.986 [2024-12-05 10:55:14.910070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.986 [2024-12-05 10:55:14.951636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.254  [2024-12-05T10:55:15.413Z] Copying: 65/65 [MB] (average 764 MBps) 00:11:48.254 00:11:48.520 10:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:11:48.520 10:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:48.520 10:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:48.520 10:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:48.521 [2024-12-05 10:55:15.465175] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:48.521 [2024-12-05 10:55:15.465247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60962 ] 00:11:48.521 { 00:11:48.521 "subsystems": [ 00:11:48.521 { 00:11:48.521 "subsystem": "bdev", 00:11:48.521 "config": [ 00:11:48.521 { 00:11:48.521 "params": { 00:11:48.521 "trtype": "pcie", 00:11:48.521 "traddr": "0000:00:10.0", 00:11:48.521 "name": "Nvme0" 00:11:48.521 }, 00:11:48.521 "method": "bdev_nvme_attach_controller" 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "params": { 00:11:48.521 "trtype": "pcie", 00:11:48.521 "traddr": "0000:00:11.0", 00:11:48.521 "name": "Nvme1" 00:11:48.521 }, 00:11:48.521 "method": "bdev_nvme_attach_controller" 00:11:48.521 }, 00:11:48.521 { 00:11:48.521 "method": "bdev_wait_for_examine" 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 } 00:11:48.521 ] 00:11:48.521 } 00:11:48.521 [2024-12-05 10:55:15.613401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.521 [2024-12-05 10:55:15.665231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.778 [2024-12-05 10:55:15.706891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.778  [2024-12-05T10:55:16.197Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:49.038 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:49.038 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:49.038 [2024-12-05 10:55:16.082318] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:49.038 [2024-12-05 10:55:16.082400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:11:49.038 { 00:11:49.038 "subsystems": [ 00:11:49.038 { 00:11:49.038 "subsystem": "bdev", 00:11:49.038 "config": [ 00:11:49.038 { 00:11:49.038 "params": { 00:11:49.038 "trtype": "pcie", 00:11:49.038 "traddr": "0000:00:10.0", 00:11:49.038 "name": "Nvme0" 00:11:49.038 }, 00:11:49.038 "method": "bdev_nvme_attach_controller" 00:11:49.038 }, 00:11:49.038 { 00:11:49.038 "params": { 00:11:49.038 "trtype": "pcie", 00:11:49.038 "traddr": "0000:00:11.0", 00:11:49.038 "name": "Nvme1" 00:11:49.038 }, 00:11:49.038 "method": "bdev_nvme_attach_controller" 00:11:49.038 }, 00:11:49.038 { 00:11:49.038 "method": "bdev_wait_for_examine" 00:11:49.038 } 00:11:49.038 ] 00:11:49.038 } 00:11:49.038 ] 00:11:49.038 } 00:11:49.297 [2024-12-05 10:55:16.232449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.297 [2024-12-05 10:55:16.284084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.297 [2024-12-05 10:55:16.325725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.556  [2024-12-05T10:55:16.974Z] Copying: 65/65 [MB] (average 942 MBps) 00:11:49.815 00:11:49.815 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:11:49.815 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:11:49.815 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:11:49.815 10:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:49.815 [2024-12-05 10:55:16.827182] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:49.815 [2024-12-05 10:55:16.827262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ] 00:11:49.815 { 00:11:49.816 "subsystems": [ 00:11:49.816 { 00:11:49.816 "subsystem": "bdev", 00:11:49.816 "config": [ 00:11:49.816 { 00:11:49.816 "params": { 00:11:49.816 "trtype": "pcie", 00:11:49.816 "traddr": "0000:00:10.0", 00:11:49.816 "name": "Nvme0" 00:11:49.816 }, 00:11:49.816 "method": "bdev_nvme_attach_controller" 00:11:49.816 }, 00:11:49.816 { 00:11:49.816 "params": { 00:11:49.816 "trtype": "pcie", 00:11:49.816 "traddr": "0000:00:11.0", 00:11:49.816 "name": "Nvme1" 00:11:49.816 }, 00:11:49.816 "method": "bdev_nvme_attach_controller" 00:11:49.816 }, 00:11:49.816 { 00:11:49.816 "method": "bdev_wait_for_examine" 00:11:49.816 } 00:11:49.816 ] 00:11:49.816 } 00:11:49.816 ] 00:11:49.816 } 00:11:49.816 [2024-12-05 10:55:16.975743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.075 [2024-12-05 10:55:17.028399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.075 [2024-12-05 10:55:17.070554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.334  [2024-12-05T10:55:17.493Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:11:50.334 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:11:50.334 00:11:50.334 real 0m2.758s 00:11:50.334 user 0m1.976s 00:11:50.334 sys 0m0.830s 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:11:50.334 ************************************ 00:11:50.334 END TEST dd_offset_magic 00:11:50.334 ************************************ 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:50.334 10:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:50.594 [2024-12-05 10:55:17.518632] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:50.594 [2024-12-05 10:55:17.518708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61030 ] 00:11:50.594 { 00:11:50.594 "subsystems": [ 00:11:50.594 { 00:11:50.594 "subsystem": "bdev", 00:11:50.594 "config": [ 00:11:50.594 { 00:11:50.594 "params": { 00:11:50.594 "trtype": "pcie", 00:11:50.594 "traddr": "0000:00:10.0", 00:11:50.594 "name": "Nvme0" 00:11:50.594 }, 00:11:50.594 "method": "bdev_nvme_attach_controller" 00:11:50.594 }, 00:11:50.594 { 00:11:50.594 "params": { 00:11:50.594 "trtype": "pcie", 00:11:50.594 "traddr": "0000:00:11.0", 00:11:50.594 "name": "Nvme1" 00:11:50.594 }, 00:11:50.594 "method": "bdev_nvme_attach_controller" 00:11:50.594 }, 00:11:50.594 { 00:11:50.594 "method": "bdev_wait_for_examine" 00:11:50.594 } 00:11:50.594 ] 00:11:50.594 } 00:11:50.594 ] 00:11:50.594 } 00:11:50.594 [2024-12-05 10:55:17.667323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.594 [2024-12-05 10:55:17.720554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.852 [2024-12-05 10:55:17.763895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:50.852  [2024-12-05T10:55:18.268Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:11:51.110 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:11:51.110 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:51.110 [2024-12-05 10:55:18.167319] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:51.110 [2024-12-05 10:55:18.167396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61046 ] 00:11:51.110 { 00:11:51.110 "subsystems": [ 00:11:51.110 { 00:11:51.110 "subsystem": "bdev", 00:11:51.110 "config": [ 00:11:51.110 { 00:11:51.110 "params": { 00:11:51.110 "trtype": "pcie", 00:11:51.110 "traddr": "0000:00:10.0", 00:11:51.110 "name": "Nvme0" 00:11:51.110 }, 00:11:51.110 "method": "bdev_nvme_attach_controller" 00:11:51.110 }, 00:11:51.110 { 00:11:51.110 "params": { 00:11:51.110 "trtype": "pcie", 00:11:51.110 "traddr": "0000:00:11.0", 00:11:51.110 "name": "Nvme1" 00:11:51.110 }, 00:11:51.110 "method": "bdev_nvme_attach_controller" 00:11:51.110 }, 00:11:51.110 { 00:11:51.110 "method": "bdev_wait_for_examine" 00:11:51.110 } 00:11:51.110 ] 00:11:51.110 } 00:11:51.110 ] 00:11:51.110 } 00:11:51.369 [2024-12-05 10:55:18.319183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.369 [2024-12-05 10:55:18.370297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.369 [2024-12-05 10:55:18.412199] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:51.629  [2024-12-05T10:55:18.788Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:11:51.629 00:11:51.629 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:11:51.629 00:11:51.629 real 0m6.609s 00:11:51.629 user 0m4.716s 00:11:51.629 sys 0m3.089s 00:11:51.629 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.629 10:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:11:51.629 ************************************ 00:11:51.629 END TEST spdk_dd_bdev_to_bdev 00:11:51.949 ************************************ 00:11:51.949 10:55:18 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:11:51.949 10:55:18 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:51.949 10:55:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.949 10:55:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.949 10:55:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:11:51.949 ************************************ 00:11:51.949 START TEST spdk_dd_uring 00:11:51.949 ************************************ 00:11:51.949 10:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:11:51.949 * Looking for test storage... 
00:11:51.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:11:51.949 10:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.949 10:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.949 10:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.210 --rc genhtml_branch_coverage=1 00:11:52.210 --rc genhtml_function_coverage=1 00:11:52.210 --rc genhtml_legend=1 00:11:52.210 --rc geninfo_all_blocks=1 00:11:52.210 --rc geninfo_unexecuted_blocks=1 00:11:52.210 00:11:52.210 ' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.210 --rc genhtml_branch_coverage=1 00:11:52.210 --rc genhtml_function_coverage=1 00:11:52.210 --rc genhtml_legend=1 00:11:52.210 --rc geninfo_all_blocks=1 00:11:52.210 --rc geninfo_unexecuted_blocks=1 00:11:52.210 00:11:52.210 ' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.210 --rc genhtml_branch_coverage=1 00:11:52.210 --rc genhtml_function_coverage=1 00:11:52.210 --rc genhtml_legend=1 00:11:52.210 --rc geninfo_all_blocks=1 00:11:52.210 --rc geninfo_unexecuted_blocks=1 00:11:52.210 00:11:52.210 ' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:52.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.210 --rc genhtml_branch_coverage=1 00:11:52.210 --rc genhtml_function_coverage=1 00:11:52.210 --rc genhtml_legend=1 00:11:52.210 --rc geninfo_all_blocks=1 00:11:52.210 --rc geninfo_unexecuted_blocks=1 00:11:52.210 00:11:52.210 ' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:11:52.210 ************************************ 00:11:52.210 START TEST dd_uring_copy 00:11:52.210 ************************************ 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:52.210 
10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:11:52.210 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=dkwx5ffj7qesg6i8c3o8g2gn2wrci475cex0vdh9x5p5sxi8s8lgqq9zindluvju4ka7srp71syagfugy7frwmrpxnra4cg18zl25g6cnkvd8dvokfie0ivk05ujdo719gutmmm6oed2vm884ht8sbdakz2o22m75cyhzlfb7w9frtgwvue3fa6g22jok2ui96o96n1uvn1z666mylp3i90d9rwsyblcfpskdnuc7e0rjqop07dc05vajf2078fvhudzpo8lzef8kwgyvx5n7mahrmajjcccefja9zb6cw81fnhnqnmb82v7ro0rzhpfy9pn4kkgsmqk2iswzirdx0vps752pswvr5x9p50tfoe87stnis9mde1u5j1veifot19intdobbyptkgtdp23ab7cxrvhmpwpal8o9qv02wy7t0r9jmwqt0pusvag0dwzv7gei4mp6uy8eebm9s9u42eh25ea2l18wy487ns84oeu0cn2lmc5enfsdv0uke8geptcseuztqpb1f1weug2s5vjz9y692ukbxiwel24b1dkibgqsnyqxpbshku3ectecxmiemrpnym1a0l3k6bth3gn9tk4118bncq8kae4oothte0nbtgsnzg9r9kzy717eg3esfokhtxtrd6o86uhch2v97brt6tw8uoafb759jtydbfrfaj42auw9bblyl4wfg5ckor7vzl0khk0n0pd3ggkfngy6t86jaht3cl6o9i5tcggemplep5naql9po0bcrpq9hwi8emdvon2rtqh6x9635wejqlpoqjld0ex0gmcz9vcceqd6jexwymkie0tatukx6jangcin27r1dbjs25zjhtyvu8s0y9cttledmlgg79d394zbxj8qaf3l5d038z8ckswr0ytn6abwkugrzozfagxoaz9btag739nd0clemqcugnyqeghlts6cntxeflcw7udag1nrd42hxzn1bucbyn3u1zcduflypld6yqpzuq8rpm23qg1e5bb8s3h 00:11:52.211 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
dkwx5ffj7qesg6i8c3o8g2gn2wrci475cex0vdh9x5p5sxi8s8lgqq9zindluvju4ka7srp71syagfugy7frwmrpxnra4cg18zl25g6cnkvd8dvokfie0ivk05ujdo719gutmmm6oed2vm884ht8sbdakz2o22m75cyhzlfb7w9frtgwvue3fa6g22jok2ui96o96n1uvn1z666mylp3i90d9rwsyblcfpskdnuc7e0rjqop07dc05vajf2078fvhudzpo8lzef8kwgyvx5n7mahrmajjcccefja9zb6cw81fnhnqnmb82v7ro0rzhpfy9pn4kkgsmqk2iswzirdx0vps752pswvr5x9p50tfoe87stnis9mde1u5j1veifot19intdobbyptkgtdp23ab7cxrvhmpwpal8o9qv02wy7t0r9jmwqt0pusvag0dwzv7gei4mp6uy8eebm9s9u42eh25ea2l18wy487ns84oeu0cn2lmc5enfsdv0uke8geptcseuztqpb1f1weug2s5vjz9y692ukbxiwel24b1dkibgqsnyqxpbshku3ectecxmiemrpnym1a0l3k6bth3gn9tk4118bncq8kae4oothte0nbtgsnzg9r9kzy717eg3esfokhtxtrd6o86uhch2v97brt6tw8uoafb759jtydbfrfaj42auw9bblyl4wfg5ckor7vzl0khk0n0pd3ggkfngy6t86jaht3cl6o9i5tcggemplep5naql9po0bcrpq9hwi8emdvon2rtqh6x9635wejqlpoqjld0ex0gmcz9vcceqd6jexwymkie0tatukx6jangcin27r1dbjs25zjhtyvu8s0y9cttledmlgg79d394zbxj8qaf3l5d038z8ckswr0ytn6abwkugrzozfagxoaz9btag739nd0clemqcugnyqeghlts6cntxeflcw7udag1nrd42hxzn1bucbyn3u1zcduflypld6yqpzuq8rpm23qg1e5bb8s3h 00:11:52.211 10:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:11:52.211 [2024-12-05 10:55:19.194712] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:11:52.211 [2024-12-05 10:55:19.194790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61125 ] 00:11:52.211 [2024-12-05 10:55:19.347019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.470 [2024-12-05 10:55:19.399880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.470 [2024-12-05 10:55:19.441279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:53.036  [2024-12-05T10:55:20.453Z] Copying: 511/511 [MB] (average 1322 MBps) 00:11:53.294 00:11:53.294 10:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:11:53.294 10:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:11:53.294 10:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:53.294 10:55:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:53.294 [2024-12-05 10:55:20.391722] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:53.294 [2024-12-05 10:55:20.391804] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:11:53.294 { 00:11:53.294 "subsystems": [ 00:11:53.294 { 00:11:53.294 "subsystem": "bdev", 00:11:53.294 "config": [ 00:11:53.294 { 00:11:53.294 "params": { 00:11:53.294 "block_size": 512, 00:11:53.294 "num_blocks": 1048576, 00:11:53.294 "name": "malloc0" 00:11:53.294 }, 00:11:53.294 "method": "bdev_malloc_create" 00:11:53.294 }, 00:11:53.294 { 00:11:53.294 "params": { 00:11:53.294 "filename": "/dev/zram1", 00:11:53.294 "name": "uring0" 00:11:53.294 }, 00:11:53.294 "method": "bdev_uring_create" 00:11:53.294 }, 00:11:53.294 { 00:11:53.294 "method": "bdev_wait_for_examine" 00:11:53.294 } 00:11:53.294 ] 00:11:53.294 } 00:11:53.294 ] 00:11:53.294 } 00:11:53.639 [2024-12-05 10:55:20.545516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.639 [2024-12-05 10:55:20.595876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.639 [2024-12-05 10:55:20.637606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.015  [2024-12-05T10:55:23.112Z] Copying: 245/512 [MB] (245 MBps) [2024-12-05T10:55:23.112Z] Copying: 497/512 [MB] (252 MBps) [2024-12-05T10:55:23.371Z] Copying: 512/512 [MB] (average 249 MBps) 00:11:56.212 00:11:56.212 10:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:11:56.212 10:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:11:56.212 10:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:56.212 10:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:56.212 [2024-12-05 10:55:23.220349] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:56.212 [2024-12-05 10:55:23.220416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61187 ] 00:11:56.212 { 00:11:56.212 "subsystems": [ 00:11:56.212 { 00:11:56.212 "subsystem": "bdev", 00:11:56.212 "config": [ 00:11:56.212 { 00:11:56.212 "params": { 00:11:56.212 "block_size": 512, 00:11:56.212 "num_blocks": 1048576, 00:11:56.212 "name": "malloc0" 00:11:56.212 }, 00:11:56.212 "method": "bdev_malloc_create" 00:11:56.212 }, 00:11:56.212 { 00:11:56.212 "params": { 00:11:56.212 "filename": "/dev/zram1", 00:11:56.212 "name": "uring0" 00:11:56.212 }, 00:11:56.212 "method": "bdev_uring_create" 00:11:56.212 }, 00:11:56.212 { 00:11:56.212 "method": "bdev_wait_for_examine" 00:11:56.212 } 00:11:56.212 ] 00:11:56.212 } 00:11:56.212 ] 00:11:56.212 } 00:11:56.212 [2024-12-05 10:55:23.370419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.471 [2024-12-05 10:55:23.422498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.471 [2024-12-05 10:55:23.464345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:57.848  [2024-12-05T10:55:25.951Z] Copying: 214/512 [MB] (214 MBps) [2024-12-05T10:55:26.209Z] Copying: 409/512 [MB] (194 MBps) [2024-12-05T10:55:26.467Z] Copying: 512/512 [MB] (average 206 MBps) 00:11:59.308 00:11:59.308 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:11:59.308 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ dkwx5ffj7qesg6i8c3o8g2gn2wrci475cex0vdh9x5p5sxi8s8lgqq9zindluvju4ka7srp71syagfugy7frwmrpxnra4cg18zl25g6cnkvd8dvokfie0ivk05ujdo719gutmmm6oed2vm884ht8sbdakz2o22m75cyhzlfb7w9frtgwvue3fa6g22jok2ui96o96n1uvn1z666mylp3i90d9rwsyblcfpskdnuc7e0rjqop07dc05vajf2078fvhudzpo8lzef8kwgyvx5n7mahrmajjcccefja9zb6cw81fnhnqnmb82v7ro0rzhpfy9pn4kkgsmqk2iswzirdx0vps752pswvr5x9p50tfoe87stnis9mde1u5j1veifot19intdobbyptkgtdp23ab7cxrvhmpwpal8o9qv02wy7t0r9jmwqt0pusvag0dwzv7gei4mp6uy8eebm9s9u42eh25ea2l18wy487ns84oeu0cn2lmc5enfsdv0uke8geptcseuztqpb1f1weug2s5vjz9y692ukbxiwel24b1dkibgqsnyqxpbshku3ectecxmiemrpnym1a0l3k6bth3gn9tk4118bncq8kae4oothte0nbtgsnzg9r9kzy717eg3esfokhtxtrd6o86uhch2v97brt6tw8uoafb759jtydbfrfaj42auw9bblyl4wfg5ckor7vzl0khk0n0pd3ggkfngy6t86jaht3cl6o9i5tcggemplep5naql9po0bcrpq9hwi8emdvon2rtqh6x9635wejqlpoqjld0ex0gmcz9vcceqd6jexwymkie0tatukx6jangcin27r1dbjs25zjhtyvu8s0y9cttledmlgg79d394zbxj8qaf3l5d038z8ckswr0ytn6abwkugrzozfagxoaz9btag739nd0clemqcugnyqeghlts6cntxeflcw7udag1nrd42hxzn1bucbyn3u1zcduflypld6yqpzuq8rpm23qg1e5bb8s3h == 
\d\k\w\x\5\f\f\j\7\q\e\s\g\6\i\8\c\3\o\8\g\2\g\n\2\w\r\c\i\4\7\5\c\e\x\0\v\d\h\9\x\5\p\5\s\x\i\8\s\8\l\g\q\q\9\z\i\n\d\l\u\v\j\u\4\k\a\7\s\r\p\7\1\s\y\a\g\f\u\g\y\7\f\r\w\m\r\p\x\n\r\a\4\c\g\1\8\z\l\2\5\g\6\c\n\k\v\d\8\d\v\o\k\f\i\e\0\i\v\k\0\5\u\j\d\o\7\1\9\g\u\t\m\m\m\6\o\e\d\2\v\m\8\8\4\h\t\8\s\b\d\a\k\z\2\o\2\2\m\7\5\c\y\h\z\l\f\b\7\w\9\f\r\t\g\w\v\u\e\3\f\a\6\g\2\2\j\o\k\2\u\i\9\6\o\9\6\n\1\u\v\n\1\z\6\6\6\m\y\l\p\3\i\9\0\d\9\r\w\s\y\b\l\c\f\p\s\k\d\n\u\c\7\e\0\r\j\q\o\p\0\7\d\c\0\5\v\a\j\f\2\0\7\8\f\v\h\u\d\z\p\o\8\l\z\e\f\8\k\w\g\y\v\x\5\n\7\m\a\h\r\m\a\j\j\c\c\c\e\f\j\a\9\z\b\6\c\w\8\1\f\n\h\n\q\n\m\b\8\2\v\7\r\o\0\r\z\h\p\f\y\9\p\n\4\k\k\g\s\m\q\k\2\i\s\w\z\i\r\d\x\0\v\p\s\7\5\2\p\s\w\v\r\5\x\9\p\5\0\t\f\o\e\8\7\s\t\n\i\s\9\m\d\e\1\u\5\j\1\v\e\i\f\o\t\1\9\i\n\t\d\o\b\b\y\p\t\k\g\t\d\p\2\3\a\b\7\c\x\r\v\h\m\p\w\p\a\l\8\o\9\q\v\0\2\w\y\7\t\0\r\9\j\m\w\q\t\0\p\u\s\v\a\g\0\d\w\z\v\7\g\e\i\4\m\p\6\u\y\8\e\e\b\m\9\s\9\u\4\2\e\h\2\5\e\a\2\l\1\8\w\y\4\8\7\n\s\8\4\o\e\u\0\c\n\2\l\m\c\5\e\n\f\s\d\v\0\u\k\e\8\g\e\p\t\c\s\e\u\z\t\q\p\b\1\f\1\w\e\u\g\2\s\5\v\j\z\9\y\6\9\2\u\k\b\x\i\w\e\l\2\4\b\1\d\k\i\b\g\q\s\n\y\q\x\p\b\s\h\k\u\3\e\c\t\e\c\x\m\i\e\m\r\p\n\y\m\1\a\0\l\3\k\6\b\t\h\3\g\n\9\t\k\4\1\1\8\b\n\c\q\8\k\a\e\4\o\o\t\h\t\e\0\n\b\t\g\s\n\z\g\9\r\9\k\z\y\7\1\7\e\g\3\e\s\f\o\k\h\t\x\t\r\d\6\o\8\6\u\h\c\h\2\v\9\7\b\r\t\6\t\w\8\u\o\a\f\b\7\5\9\j\t\y\d\b\f\r\f\a\j\4\2\a\u\w\9\b\b\l\y\l\4\w\f\g\5\c\k\o\r\7\v\z\l\0\k\h\k\0\n\0\p\d\3\g\g\k\f\n\g\y\6\t\8\6\j\a\h\t\3\c\l\6\o\9\i\5\t\c\g\g\e\m\p\l\e\p\5\n\a\q\l\9\p\o\0\b\c\r\p\q\9\h\w\i\8\e\m\d\v\o\n\2\r\t\q\h\6\x\9\6\3\5\w\e\j\q\l\p\o\q\j\l\d\0\e\x\0\g\m\c\z\9\v\c\c\e\q\d\6\j\e\x\w\y\m\k\i\e\0\t\a\t\u\k\x\6\j\a\n\g\c\i\n\2\7\r\1\d\b\j\s\2\5\z\j\h\t\y\v\u\8\s\0\y\9\c\t\t\l\e\d\m\l\g\g\7\9\d\3\9\4\z\b\x\j\8\q\a\f\3\l\5\d\0\3\8\z\8\c\k\s\w\r\0\y\t\n\6\a\b\w\k\u\g\r\z\o\z\f\a\g\x\o\a\z\9\b\t\a\g\7\3\9\n\d\0\c\l\e\m\q\c\u\g\n\y\q\e\g\h\l\t\s\6\c\n\t\x\e\f\l\c\w\7\u\d\a\g\1\n\r\d\4\2\h\x\z\n\1\b\u\c\b\y\n\3\u\1\z\c\d\u\f\l\y\p\l\d\6\y\q\p\z\u\q\8\r\p\m\2\3\q\g\1\e\5\b\b\8\s\3\h ]] 00:11:59.308 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:11:59.308 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ dkwx5ffj7qesg6i8c3o8g2gn2wrci475cex0vdh9x5p5sxi8s8lgqq9zindluvju4ka7srp71syagfugy7frwmrpxnra4cg18zl25g6cnkvd8dvokfie0ivk05ujdo719gutmmm6oed2vm884ht8sbdakz2o22m75cyhzlfb7w9frtgwvue3fa6g22jok2ui96o96n1uvn1z666mylp3i90d9rwsyblcfpskdnuc7e0rjqop07dc05vajf2078fvhudzpo8lzef8kwgyvx5n7mahrmajjcccefja9zb6cw81fnhnqnmb82v7ro0rzhpfy9pn4kkgsmqk2iswzirdx0vps752pswvr5x9p50tfoe87stnis9mde1u5j1veifot19intdobbyptkgtdp23ab7cxrvhmpwpal8o9qv02wy7t0r9jmwqt0pusvag0dwzv7gei4mp6uy8eebm9s9u42eh25ea2l18wy487ns84oeu0cn2lmc5enfsdv0uke8geptcseuztqpb1f1weug2s5vjz9y692ukbxiwel24b1dkibgqsnyqxpbshku3ectecxmiemrpnym1a0l3k6bth3gn9tk4118bncq8kae4oothte0nbtgsnzg9r9kzy717eg3esfokhtxtrd6o86uhch2v97brt6tw8uoafb759jtydbfrfaj42auw9bblyl4wfg5ckor7vzl0khk0n0pd3ggkfngy6t86jaht3cl6o9i5tcggemplep5naql9po0bcrpq9hwi8emdvon2rtqh6x9635wejqlpoqjld0ex0gmcz9vcceqd6jexwymkie0tatukx6jangcin27r1dbjs25zjhtyvu8s0y9cttledmlgg79d394zbxj8qaf3l5d038z8ckswr0ytn6abwkugrzozfagxoaz9btag739nd0clemqcugnyqeghlts6cntxeflcw7udag1nrd42hxzn1bucbyn3u1zcduflypld6yqpzuq8rpm23qg1e5bb8s3h == 
\d\k\w\x\5\f\f\j\7\q\e\s\g\6\i\8\c\3\o\8\g\2\g\n\2\w\r\c\i\4\7\5\c\e\x\0\v\d\h\9\x\5\p\5\s\x\i\8\s\8\l\g\q\q\9\z\i\n\d\l\u\v\j\u\4\k\a\7\s\r\p\7\1\s\y\a\g\f\u\g\y\7\f\r\w\m\r\p\x\n\r\a\4\c\g\1\8\z\l\2\5\g\6\c\n\k\v\d\8\d\v\o\k\f\i\e\0\i\v\k\0\5\u\j\d\o\7\1\9\g\u\t\m\m\m\6\o\e\d\2\v\m\8\8\4\h\t\8\s\b\d\a\k\z\2\o\2\2\m\7\5\c\y\h\z\l\f\b\7\w\9\f\r\t\g\w\v\u\e\3\f\a\6\g\2\2\j\o\k\2\u\i\9\6\o\9\6\n\1\u\v\n\1\z\6\6\6\m\y\l\p\3\i\9\0\d\9\r\w\s\y\b\l\c\f\p\s\k\d\n\u\c\7\e\0\r\j\q\o\p\0\7\d\c\0\5\v\a\j\f\2\0\7\8\f\v\h\u\d\z\p\o\8\l\z\e\f\8\k\w\g\y\v\x\5\n\7\m\a\h\r\m\a\j\j\c\c\c\e\f\j\a\9\z\b\6\c\w\8\1\f\n\h\n\q\n\m\b\8\2\v\7\r\o\0\r\z\h\p\f\y\9\p\n\4\k\k\g\s\m\q\k\2\i\s\w\z\i\r\d\x\0\v\p\s\7\5\2\p\s\w\v\r\5\x\9\p\5\0\t\f\o\e\8\7\s\t\n\i\s\9\m\d\e\1\u\5\j\1\v\e\i\f\o\t\1\9\i\n\t\d\o\b\b\y\p\t\k\g\t\d\p\2\3\a\b\7\c\x\r\v\h\m\p\w\p\a\l\8\o\9\q\v\0\2\w\y\7\t\0\r\9\j\m\w\q\t\0\p\u\s\v\a\g\0\d\w\z\v\7\g\e\i\4\m\p\6\u\y\8\e\e\b\m\9\s\9\u\4\2\e\h\2\5\e\a\2\l\1\8\w\y\4\8\7\n\s\8\4\o\e\u\0\c\n\2\l\m\c\5\e\n\f\s\d\v\0\u\k\e\8\g\e\p\t\c\s\e\u\z\t\q\p\b\1\f\1\w\e\u\g\2\s\5\v\j\z\9\y\6\9\2\u\k\b\x\i\w\e\l\2\4\b\1\d\k\i\b\g\q\s\n\y\q\x\p\b\s\h\k\u\3\e\c\t\e\c\x\m\i\e\m\r\p\n\y\m\1\a\0\l\3\k\6\b\t\h\3\g\n\9\t\k\4\1\1\8\b\n\c\q\8\k\a\e\4\o\o\t\h\t\e\0\n\b\t\g\s\n\z\g\9\r\9\k\z\y\7\1\7\e\g\3\e\s\f\o\k\h\t\x\t\r\d\6\o\8\6\u\h\c\h\2\v\9\7\b\r\t\6\t\w\8\u\o\a\f\b\7\5\9\j\t\y\d\b\f\r\f\a\j\4\2\a\u\w\9\b\b\l\y\l\4\w\f\g\5\c\k\o\r\7\v\z\l\0\k\h\k\0\n\0\p\d\3\g\g\k\f\n\g\y\6\t\8\6\j\a\h\t\3\c\l\6\o\9\i\5\t\c\g\g\e\m\p\l\e\p\5\n\a\q\l\9\p\o\0\b\c\r\p\q\9\h\w\i\8\e\m\d\v\o\n\2\r\t\q\h\6\x\9\6\3\5\w\e\j\q\l\p\o\q\j\l\d\0\e\x\0\g\m\c\z\9\v\c\c\e\q\d\6\j\e\x\w\y\m\k\i\e\0\t\a\t\u\k\x\6\j\a\n\g\c\i\n\2\7\r\1\d\b\j\s\2\5\z\j\h\t\y\v\u\8\s\0\y\9\c\t\t\l\e\d\m\l\g\g\7\9\d\3\9\4\z\b\x\j\8\q\a\f\3\l\5\d\0\3\8\z\8\c\k\s\w\r\0\y\t\n\6\a\b\w\k\u\g\r\z\o\z\f\a\g\x\o\a\z\9\b\t\a\g\7\3\9\n\d\0\c\l\e\m\q\c\u\g\n\y\q\e\g\h\l\t\s\6\c\n\t\x\e\f\l\c\w\7\u\d\a\g\1\n\r\d\4\2\h\x\z\n\1\b\u\c\b\y\n\3\u\1\z\c\d\u\f\l\y\p\l\d\6\y\q\p\z\u\q\8\r\p\m\2\3\q\g\1\e\5\b\b\8\s\3\h ]] 00:11:59.308 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:11:59.874 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:11:59.874 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:11:59.874 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:11:59.874 10:55:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:11:59.874 [2024-12-05 10:55:26.863994] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:11:59.874 [2024-12-05 10:55:26.864068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61246 ] 00:11:59.874 { 00:11:59.874 "subsystems": [ 00:11:59.874 { 00:11:59.874 "subsystem": "bdev", 00:11:59.874 "config": [ 00:11:59.874 { 00:11:59.874 "params": { 00:11:59.874 "block_size": 512, 00:11:59.874 "num_blocks": 1048576, 00:11:59.874 "name": "malloc0" 00:11:59.874 }, 00:11:59.874 "method": "bdev_malloc_create" 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "params": { 00:11:59.874 "filename": "/dev/zram1", 00:11:59.874 "name": "uring0" 00:11:59.874 }, 00:11:59.874 "method": "bdev_uring_create" 00:11:59.874 }, 00:11:59.874 { 00:11:59.874 "method": "bdev_wait_for_examine" 00:11:59.874 } 00:11:59.874 ] 00:11:59.874 } 00:11:59.874 ] 00:11:59.874 } 00:11:59.874 [2024-12-05 10:55:27.016048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.132 [2024-12-05 10:55:27.066473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.132 [2024-12-05 10:55:27.108950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:01.512  [2024-12-05T10:55:29.612Z] Copying: 191/512 [MB] (191 MBps) [2024-12-05T10:55:30.178Z] Copying: 381/512 [MB] (190 MBps) [2024-12-05T10:55:30.436Z] Copying: 512/512 [MB] (average 190 MBps) 00:12:03.277 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:03.277 10:55:30 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:03.277 [2024-12-05 10:55:30.313237] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:03.277 [2024-12-05 10:55:30.313322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61296 ] 00:12:03.277 { 00:12:03.277 "subsystems": [ 00:12:03.277 { 00:12:03.277 "subsystem": "bdev", 00:12:03.277 "config": [ 00:12:03.277 { 00:12:03.277 "params": { 00:12:03.277 "block_size": 512, 00:12:03.277 "num_blocks": 1048576, 00:12:03.277 "name": "malloc0" 00:12:03.277 }, 00:12:03.277 "method": "bdev_malloc_create" 00:12:03.277 }, 00:12:03.277 { 00:12:03.277 "params": { 00:12:03.277 "filename": "/dev/zram1", 00:12:03.277 "name": "uring0" 00:12:03.277 }, 00:12:03.277 "method": "bdev_uring_create" 00:12:03.277 }, 00:12:03.277 { 00:12:03.277 "params": { 00:12:03.277 "name": "uring0" 00:12:03.277 }, 00:12:03.277 "method": "bdev_uring_delete" 00:12:03.277 }, 00:12:03.277 { 00:12:03.277 "method": "bdev_wait_for_examine" 00:12:03.277 } 00:12:03.277 ] 00:12:03.277 } 00:12:03.277 ] 00:12:03.277 } 00:12:03.535 [2024-12-05 10:55:30.462099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.535 [2024-12-05 10:55:30.505443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.535 [2024-12-05 10:55:30.546909] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:03.792  [2024-12-05T10:55:31.208Z] Copying: 0/0 [B] (average 0 Bps) 00:12:04.049 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:12:04.049 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:04.050 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:04.050 10:55:31 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:12:04.050 { 00:12:04.050 "subsystems": [ 00:12:04.050 { 00:12:04.050 "subsystem": "bdev", 00:12:04.050 "config": [ 00:12:04.050 { 00:12:04.050 "params": { 00:12:04.050 "block_size": 512, 00:12:04.050 "num_blocks": 1048576, 00:12:04.050 "name": "malloc0" 00:12:04.050 }, 00:12:04.050 "method": "bdev_malloc_create" 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "params": { 00:12:04.050 "filename": "/dev/zram1", 00:12:04.050 "name": "uring0" 00:12:04.050 }, 00:12:04.050 "method": "bdev_uring_create" 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "params": { 00:12:04.050 "name": "uring0" 00:12:04.050 }, 00:12:04.050 "method": "bdev_uring_delete" 00:12:04.050 }, 00:12:04.050 { 00:12:04.050 "method": "bdev_wait_for_examine" 00:12:04.050 } 00:12:04.050 ] 00:12:04.050 } 00:12:04.050 ] 00:12:04.050 } 00:12:04.050 [2024-12-05 10:55:31.104887] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:04.050 [2024-12-05 10:55:31.104953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:12:04.308 [2024-12-05 10:55:31.254960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.308 [2024-12-05 10:55:31.306078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.308 [2024-12-05 10:55:31.347744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:04.567 [2024-12-05 10:55:31.517813] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:12:04.567 [2024-12-05 10:55:31.517873] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:12:04.567 [2024-12-05 10:55:31.517883] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:12:04.567 [2024-12-05 10:55:31.517893] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.826 [2024-12-05 10:55:31.768303] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:12:04.826 10:55:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:12:05.084 00:12:05.084 real 0m13.058s 00:12:05.084 user 0m8.449s 00:12:05.084 sys 0m11.271s 00:12:05.084 10:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.084 10:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:12:05.084 ************************************ 00:12:05.084 END TEST dd_uring_copy 00:12:05.084 ************************************ 00:12:05.084 00:12:05.084 real 0m13.371s 00:12:05.084 user 0m8.611s 00:12:05.084 sys 0m11.442s 00:12:05.084 10:55:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.084 ************************************ 00:12:05.084 END TEST spdk_dd_uring 00:12:05.084 ************************************ 00:12:05.084 10:55:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:12:05.343 10:55:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:05.343 10:55:32 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.343 10:55:32 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.343 10:55:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:05.343 ************************************ 00:12:05.343 START TEST spdk_dd_sparse 00:12:05.343 ************************************ 00:12:05.343 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:12:05.343 * Looking for test storage... 00:12:05.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:05.343 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.343 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.343 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.623 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.624 --rc genhtml_branch_coverage=1 00:12:05.624 --rc genhtml_function_coverage=1 00:12:05.624 --rc genhtml_legend=1 00:12:05.624 --rc geninfo_all_blocks=1 00:12:05.624 --rc geninfo_unexecuted_blocks=1 00:12:05.624 00:12:05.624 ' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.624 --rc genhtml_branch_coverage=1 00:12:05.624 --rc genhtml_function_coverage=1 00:12:05.624 --rc genhtml_legend=1 00:12:05.624 --rc geninfo_all_blocks=1 00:12:05.624 --rc geninfo_unexecuted_blocks=1 00:12:05.624 00:12:05.624 ' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.624 --rc genhtml_branch_coverage=1 00:12:05.624 --rc genhtml_function_coverage=1 00:12:05.624 --rc genhtml_legend=1 00:12:05.624 --rc geninfo_all_blocks=1 00:12:05.624 --rc geninfo_unexecuted_blocks=1 00:12:05.624 00:12:05.624 ' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.624 --rc genhtml_branch_coverage=1 00:12:05.624 --rc genhtml_function_coverage=1 00:12:05.624 --rc genhtml_legend=1 00:12:05.624 --rc geninfo_all_blocks=1 00:12:05.624 --rc geninfo_unexecuted_blocks=1 00:12:05.624 00:12:05.624 ' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.624 10:55:32 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:12:05.624 1+0 records in 00:12:05.624 1+0 records out 00:12:05.624 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00986814 s, 425 MB/s 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:12:05.624 1+0 records in 00:12:05.624 1+0 records out 00:12:05.624 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00926483 s, 453 MB/s 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:12:05.624 1+0 records in 00:12:05.624 1+0 records out 00:12:05.624 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00976415 s, 430 MB/s 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:05.624 ************************************ 00:12:05.624 START TEST dd_sparse_file_to_file 00:12:05.624 ************************************ 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:05.624 10:55:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:05.624 [2024-12-05 10:55:32.661117] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
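The prepare step above lays out everything the three sparse tests share; a minimal re-creation of the traced commands (names and sizes exactly as in the log, with seek counted in bs-sized units):

  truncate dd_sparse_aio_disk --size 104857600        # 100 MiB plain file backing the dd_aio AIO bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1         # 4 MiB data extent at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # extent at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # extent at 32 MiB; apparent size becomes 36 MiB

file_zero1 therefore reports 37748736 bytes (36 MiB) of apparent size while only three 4 MiB extents are allocated, leaving two 12 MiB holes. The spdk_dd run just launched copies it to file_zero2 with --bs=12582912 (a 12 MiB I/O unit, exactly one hole's worth) and --sparse, which per the usage text later in this log enables hole skipping in the input target; the attached JSON config merely instantiates the dd_aio bdev and the dd_lvstore that the next test will write into. The stat %s/%b pairs further down are the pass criteria: apparent size must stay 37748736 on both files and allocated 512-byte blocks must stay 24576 (24576 * 512 = 12582912 bytes of real data), proving the holes survived the copy.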
00:12:05.624 [2024-12-05 10:55:32.661196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61433 ] 00:12:05.624 { 00:12:05.624 "subsystems": [ 00:12:05.624 { 00:12:05.624 "subsystem": "bdev", 00:12:05.624 "config": [ 00:12:05.624 { 00:12:05.624 "params": { 00:12:05.624 "block_size": 4096, 00:12:05.624 "filename": "dd_sparse_aio_disk", 00:12:05.624 "name": "dd_aio" 00:12:05.624 }, 00:12:05.624 "method": "bdev_aio_create" 00:12:05.624 }, 00:12:05.624 { 00:12:05.624 "params": { 00:12:05.624 "lvs_name": "dd_lvstore", 00:12:05.624 "bdev_name": "dd_aio" 00:12:05.624 }, 00:12:05.624 "method": "bdev_lvol_create_lvstore" 00:12:05.624 }, 00:12:05.624 { 00:12:05.624 "method": "bdev_wait_for_examine" 00:12:05.624 } 00:12:05.624 ] 00:12:05.624 } 00:12:05.624 ] 00:12:05.625 } 00:12:05.920 [2024-12-05 10:55:32.809881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.920 [2024-12-05 10:55:32.859953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.920 [2024-12-05 10:55:32.902675] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:05.920  [2024-12-05T10:55:33.337Z] Copying: 12/36 [MB] (average 800 MBps) 00:12:06.178 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:12:06.178 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:06.178 00:12:06.178 real 0m0.608s 00:12:06.178 user 0m0.349s 00:12:06.178 sys 0m0.335s 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:06.179 ************************************ 00:12:06.179 END TEST dd_sparse_file_to_file 00:12:06.179 ************************************ 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:06.179 ************************************ 00:12:06.179 START TEST dd_sparse_file_to_bdev 
00:12:06.179 ************************************ 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:12:06.179 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:06.179 [2024-12-05 10:55:33.333420] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:06.179 [2024-12-05 10:55:33.333500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61470 ] 00:12:06.437 { 00:12:06.437 "subsystems": [ 00:12:06.437 { 00:12:06.437 "subsystem": "bdev", 00:12:06.437 "config": [ 00:12:06.437 { 00:12:06.437 "params": { 00:12:06.438 "block_size": 4096, 00:12:06.438 "filename": "dd_sparse_aio_disk", 00:12:06.438 "name": "dd_aio" 00:12:06.438 }, 00:12:06.438 "method": "bdev_aio_create" 00:12:06.438 }, 00:12:06.438 { 00:12:06.438 "params": { 00:12:06.438 "lvs_name": "dd_lvstore", 00:12:06.438 "lvol_name": "dd_lvol", 00:12:06.438 "size_in_mib": 36, 00:12:06.438 "thin_provision": true 00:12:06.438 }, 00:12:06.438 "method": "bdev_lvol_create" 00:12:06.438 }, 00:12:06.438 { 00:12:06.438 "method": "bdev_wait_for_examine" 00:12:06.438 } 00:12:06.438 ] 00:12:06.438 } 00:12:06.438 ] 00:12:06.438 } 00:12:06.438 [2024-12-05 10:55:33.481955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.438 [2024-12-05 10:55:33.533925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.438 [2024-12-05 10:55:33.576014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.697  [2024-12-05T10:55:33.856Z] Copying: 12/36 [MB] (average 428 MBps) 00:12:06.697 00:12:06.697 00:12:06.697 real 0m0.565s 00:12:06.697 user 0m0.348s 00:12:06.697 sys 0m0.310s 00:12:06.697 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.697 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:12:06.697 ************************************ 00:12:06.697 END TEST dd_sparse_file_to_bdev 00:12:06.697 ************************************ 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:06.957 ************************************ 00:12:06.957 START TEST dd_sparse_bdev_to_file 00:12:06.957 ************************************ 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:12:06.957 10:55:33 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:06.957 [2024-12-05 10:55:33.961547] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:06.957 [2024-12-05 10:55:33.961652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61508 ] 00:12:06.957 { 00:12:06.957 "subsystems": [ 00:12:06.957 { 00:12:06.957 "subsystem": "bdev", 00:12:06.957 "config": [ 00:12:06.957 { 00:12:06.957 "params": { 00:12:06.957 "block_size": 4096, 00:12:06.957 "filename": "dd_sparse_aio_disk", 00:12:06.957 "name": "dd_aio" 00:12:06.957 }, 00:12:06.957 "method": "bdev_aio_create" 00:12:06.957 }, 00:12:06.957 { 00:12:06.957 "method": "bdev_wait_for_examine" 00:12:06.957 } 00:12:06.957 ] 00:12:06.957 } 00:12:06.957 ] 00:12:06.957 } 00:12:07.216 [2024-12-05 10:55:34.120133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.216 [2024-12-05 10:55:34.169812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.216 [2024-12-05 10:55:34.211733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.216  [2024-12-05T10:55:34.635Z] Copying: 12/36 [MB] (average 705 MBps) 00:12:07.476 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:12:07.476 ************************************ 00:12:07.476 END TEST dd_sparse_bdev_to_file 00:12:07.476 ************************************ 00:12:07.476 00:12:07.476 real 0m0.586s 00:12:07.476 user 0m0.362s 00:12:07.476 sys 0m0.309s 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:12:07.476 00:12:07.476 real 0m2.295s 00:12:07.476 user 0m1.281s 00:12:07.476 sys 0m1.279s 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.476 10:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:12:07.476 ************************************ 00:12:07.476 END TEST spdk_dd_sparse 00:12:07.476 ************************************ 00:12:07.735 10:55:34 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:07.735 10:55:34 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.735 10:55:34 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.735 10:55:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:07.735 ************************************ 00:12:07.735 START TEST spdk_dd_negative 00:12:07.735 ************************************ 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:12:07.735 * Looking for test storage... 
00:12:07.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:12:07.735 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.736 --rc genhtml_branch_coverage=1 00:12:07.736 --rc genhtml_function_coverage=1 00:12:07.736 --rc genhtml_legend=1 00:12:07.736 --rc geninfo_all_blocks=1 00:12:07.736 --rc geninfo_unexecuted_blocks=1 00:12:07.736 00:12:07.736 ' 00:12:07.736 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.736 --rc genhtml_branch_coverage=1 00:12:07.736 --rc genhtml_function_coverage=1 00:12:07.736 --rc genhtml_legend=1 00:12:07.736 --rc geninfo_all_blocks=1 00:12:07.736 --rc geninfo_unexecuted_blocks=1 00:12:07.736 00:12:07.736 ' 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.013 --rc genhtml_branch_coverage=1 00:12:08.013 --rc genhtml_function_coverage=1 00:12:08.013 --rc genhtml_legend=1 00:12:08.013 --rc geninfo_all_blocks=1 00:12:08.013 --rc geninfo_unexecuted_blocks=1 00:12:08.013 00:12:08.013 ' 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.013 --rc genhtml_branch_coverage=1 00:12:08.013 --rc genhtml_function_coverage=1 00:12:08.013 --rc genhtml_legend=1 00:12:08.013 --rc geninfo_all_blocks=1 00:12:08.013 --rc geninfo_unexecuted_blocks=1 00:12:08.013 00:12:08.013 ' 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.013 ************************************ 00:12:08.013 START TEST 
dd_invalid_arguments ************************************ 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.013 10:55:34 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:12:08.014 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:12:08.014 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:12:08.014 00:12:08.014 CPU options: 00:12:08.014 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:12:08.014 (like [0,1,10]) 00:12:08.014 --lcores lcore to CPU mapping list. The list is in the format: 00:12:08.014 <lcores[@CPUs]>[<,lcores[@CPUs]>...] 00:12:08.014 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:12:08.014 Within the group, '-' is used for range separator, 00:12:08.014 ',' is used for single number separator. 00:12:08.014 '( )' can be omitted for single element group, 00:12:08.014 '@' can be omitted if cpus and lcores have the same value 00:12:08.014 --disable-cpumask-locks Disable CPU core lock files. 00:12:08.014 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:12:08.014 pollers in the app support interrupt mode) 00:12:08.014 -p, --main-core main (primary) core for DPDK 00:12:08.014 00:12:08.014 Configuration options: 00:12:08.014 -c, --config, --json JSON config file 00:12:08.014 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:12:08.014 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value.
00:12:08.014 --wait-for-rpc wait for RPCs to initialize subsystems 00:12:08.014 --rpcs-allowed comma-separated list of permitted RPCS 00:12:08.014 --json-ignore-init-errors don't exit on invalid config entry 00:12:08.014 00:12:08.014 Memory options: 00:12:08.014 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:12:08.014 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:12:08.014 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:12:08.014 -R, --huge-unlink unlink huge files after initialization 00:12:08.014 -n, --mem-channels number of memory channels used for DPDK 00:12:08.014 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:12:08.014 --msg-mempool-size global message memory pool size in count (default: 262143) 00:12:08.014 --no-huge run without using hugepages 00:12:08.014 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:12:08.014 -i, --shm-id shared memory ID (optional) 00:12:08.014 -g, --single-file-segments force creating just one hugetlbfs file 00:12:08.014 00:12:08.014 PCI options: 00:12:08.014 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:12:08.014 -B, --pci-blocked pci addr to block (can be used more than once) 00:12:08.014 -u, --no-pci disable PCI access 00:12:08.014 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:12:08.014 00:12:08.014 Log options: 00:12:08.014 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:12:08.014 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:12:08.014 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:12:08.014 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:12:08.014 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:12:08.014 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:12:08.014 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:12:08.014 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:12:08.014 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:12:08.014 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:12:08.014 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:12:08.014 --silence-noticelog disable notice level logging to stderr 00:12:08.014 00:12:08.014 Trace options: 00:12:08.014 --num-trace-entries number of trace entries for each core, must be power of 2, 00:12:08.014 setting 0 to disable trace (default 32768) 00:12:08.014 Tracepoints vary in size and can use more than one trace entry. 00:12:08.014 -e, --tpoint-group <group_name>[:<tpoint_mask>] 00:12:08.014 [2024-12-05 10:55:34.988955] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:12:08.014 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:12:08.014 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:12:08.014 bdev_raid, scheduler, all). 00:12:08.014 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:12:08.014 a tracepoint group. First tpoint inside a group can be enabled by 00:12:08.014 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:12:08.014 combined (e.g. thread,bdev:0x1).
All available tpoints can be found 00:12:08.014 in /include/spdk_internal/trace_defs.h 00:12:08.014 00:12:08.014 Other options: 00:12:08.014 -h, --help show this usage 00:12:08.014 -v, --version print SPDK version 00:12:08.014 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:12:08.014 --env-context Opaque context for use of the env implementation 00:12:08.014 00:12:08.014 Application specific: 00:12:08.014 [--------- DD Options ---------] 00:12:08.014 --if Input file. Must specify either --if or --ib. 00:12:08.014 --ib Input bdev. Must specify either --if or --ib 00:12:08.014 --of Output file. Must specify either --of or --ob. 00:12:08.014 --ob Output bdev. Must specify either --of or --ob. 00:12:08.014 --iflag Input file flags. 00:12:08.014 --oflag Output file flags. 00:12:08.014 --bs I/O unit size (default: 4096) 00:12:08.014 --qd Queue depth (default: 2) 00:12:08.014 --count I/O unit count. The number of I/O units to copy. (default: all) 00:12:08.014 --skip Skip this many I/O units at start of input. (default: 0) 00:12:08.014 --seek Skip this many I/O units at start of output. (default: 0) 00:12:08.014 --aio Force usage of AIO. (by default io_uring is used if available) 00:12:08.014 --sparse Enable hole skipping in input target 00:12:08.014 Available iflag and oflag values: 00:12:08.014 append - append mode 00:12:08.014 direct - use direct I/O for data 00:12:08.014 directory - fail unless a directory 00:12:08.014 dsync - use synchronized I/O for data 00:12:08.014 noatime - do not update access time 00:12:08.014 noctty - do not assign controlling terminal from file 00:12:08.014 nofollow - do not follow symlinks 00:12:08.014 nonblock - use non-blocking I/O 00:12:08.014 sync - use synchronized I/O for data and metadata 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.014 00:12:08.014 real 0m0.071s 00:12:08.014 user 0m0.032s 00:12:08.014 sys 0m0.038s 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:12:08.014 ************************************ 00:12:08.014 END TEST dd_invalid_arguments 00:12:08.014 ************************************ 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.014 ************************************ 00:12:08.014 START TEST dd_double_input 00:12:08.014 ************************************ 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.014 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.017 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:12:08.017 [2024-12-05 10:55:35.128172] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
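Every test in this negative suite rides the same inversion harness whose xtrace lines keep repeating above. A simplified sketch of the idea (the real logic is spread across common/autotest_common.sh and run_test; this folds the traced steps into one function, and $SPDK_DD is an illustrative stand-in for the full build/bin/spdk_dd path):

  NOT() {                                  # succeeds only when the wrapped command fails
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$((es - 128))   # fold signal-range exit statuses back into plain errors
      (( es != 0 ))                        # a non-zero status is exactly what a negative test wants
  }
  NOT "$SPDK_DD" --if=test/dd/dd.dump0 --ib= --ob=   # roughly dd_double_input's invocation

In this instance spdk_dd rejected the conflicting --if/--ib pair at spdk_dd.c:1485 and exited with status 22 (the es=22 recorded just below), so NOT returns success and run_test counts dd_double_input as passed.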
00:12:08.017 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:12:08.017 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.018 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.018 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.018 00:12:08.018 real 0m0.071s 00:12:08.018 user 0m0.036s 00:12:08.018 sys 0m0.034s 00:12:08.018 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.018 10:55:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:12:08.018 ************************************ 00:12:08.018 END TEST dd_double_input 00:12:08.018 ************************************ 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.291 ************************************ 00:12:08.291 START TEST dd_double_output 00:12:08.291 ************************************ 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:12:08.291 [2024-12-05 10:55:35.271865] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.291 00:12:08.291 real 0m0.075s 00:12:08.291 user 0m0.040s 00:12:08.291 sys 0m0.033s 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:12:08.291 ************************************ 00:12:08.291 END TEST dd_double_output 00:12:08.291 ************************************ 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.291 ************************************ 00:12:08.291 START TEST dd_no_input 00:12:08.291 ************************************ 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:12:08.291 [2024-12-05 10:55:35.413640] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.291 00:12:08.291 real 0m0.075s 00:12:08.291 user 0m0.036s 00:12:08.291 sys 0m0.038s 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.291 10:55:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:12:08.291 ************************************ 00:12:08.291 END TEST dd_no_input 00:12:08.291 ************************************ 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.553 ************************************ 00:12:08.553 START TEST dd_no_output 00:12:08.553 ************************************ 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:12:08.553 [2024-12-05 10:55:35.563645] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:12:08.553 10:55:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.553 00:12:08.553 real 0m0.080s 00:12:08.553 user 0m0.042s 00:12:08.553 sys 0m0.037s 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:12:08.553 ************************************ 00:12:08.553 END TEST dd_no_output 00:12:08.553 ************************************ 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.553 ************************************ 00:12:08.553 START TEST dd_wrong_blocksize 00:12:08.553 ************************************ 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.553 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:12:08.553 [2024-12-05 10:55:35.705190] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.812 00:12:08.812 real 0m0.075s 00:12:08.812 user 0m0.045s 00:12:08.812 sys 0m0.028s 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:08.812 ************************************ 00:12:08.812 END TEST dd_wrong_blocksize 00:12:08.812 ************************************ 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:08.812 ************************************ 00:12:08.812 START TEST dd_smaller_blocksize 00:12:08.812 ************************************ 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:12:08.812 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:08.813 
10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:08.813 10:55:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:12:08.813 [2024-12-05 10:55:35.851636] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:08.813 [2024-12-05 10:55:35.851709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61740 ] 00:12:09.072 [2024-12-05 10:55:36.001971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.072 [2024-12-05 10:55:36.054945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.072 [2024-12-05 10:55:36.096835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.331 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:09.590 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:12:09.590 [2024-12-05 10:55:36.702376] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:12:09.590 [2024-12-05 10:55:36.702459] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:09.850 [2024-12-05 10:55:36.802714] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.850 00:12:09.850 real 0m1.074s 00:12:09.850 user 0m0.397s 00:12:09.850 sys 0m0.570s 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:12:09.850 ************************************ 00:12:09.850 END TEST dd_smaller_blocksize 00:12:09.850 ************************************ 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:09.850 ************************************ 00:12:09.850 START TEST dd_invalid_count 00:12:09.850 ************************************ 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
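Before dd_invalid_count plays out, one note on the test that just finished: dd_smaller_blocksize is the only case in this suite that fails in the allocator rather than in flag parsing. In isolation it amounts to (a sketch, reusing the NOT helper sketched earlier; $SPDK_DD again stands in for the full binary path):

  NOT "$SPDK_DD" --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --bs=99999999999999

The flag parses fine, but spdk_dd then has to carve a single I/O unit of roughly 91 TiB out of hugepage memory; DPDK's eal_memalloc gives up twice ("couldn't find suitable memseg_list") and dd_run aborts at spdk_dd.c:1182 with "Cannot allocate memory - try smaller block size value". The shell reports exit status 244, which reads as -ENOMEM truncated to 8 bits (256 - 12); the harness folds that to 116 and finally to es=1, so the failure registers as the expected one and the test passes.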
00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:09.850 10:55:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:12:09.850 [2024-12-05 10:55:36.994107] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:12:09.850 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:12:09.850 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.109 00:12:10.109 real 0m0.077s 00:12:10.109 user 0m0.044s 00:12:10.109 sys 0m0.032s 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:12:10.109 ************************************ 00:12:10.109 END TEST dd_invalid_count 00:12:10.109 ************************************ 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:10.109 ************************************ 
00:12:10.109 START TEST dd_invalid_oflag 00:12:10.109 ************************************ 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:10.109 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:12:10.110 [2024-12-05 10:55:37.145358] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.110 00:12:10.110 real 0m0.082s 00:12:10.110 user 0m0.047s 00:12:10.110 sys 0m0.035s 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:12:10.110 ************************************ 00:12:10.110 END TEST dd_invalid_oflag 00:12:10.110 ************************************ 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:10.110 ************************************ 00:12:10.110 START TEST dd_invalid_iflag 00:12:10.110 
************************************ 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:10.110 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:12:10.368 [2024-12-05 10:55:37.299222] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.368 00:12:10.368 real 0m0.079s 00:12:10.368 user 0m0.045s 00:12:10.368 sys 0m0.033s 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:12:10.368 ************************************ 00:12:10.368 END TEST dd_invalid_iflag 00:12:10.368 ************************************ 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:10.368 ************************************ 00:12:10.368 START TEST dd_unknown_flag 00:12:10.368 ************************************ 00:12:10.368 
10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:10.368 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:12:10.368 [2024-12-05 10:55:37.451476] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:10.369 [2024-12-05 10:55:37.451550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61832 ] 00:12:10.628 [2024-12-05 10:55:37.600146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.628 [2024-12-05 10:55:37.652146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.628 [2024-12-05 10:55:37.693764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.628 [2024-12-05 10:55:37.723732] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:12:10.628 [2024-12-05 10:55:37.723788] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.628 [2024-12-05 10:55:37.723834] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:12:10.628 [2024-12-05 10:55:37.723845] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.628 [2024-12-05 10:55:37.724058] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:12:10.628 [2024-12-05 10:55:37.724071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:10.628 [2024-12-05 10:55:37.724118] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:10.628 [2024-12-05 10:55:37.724126] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:12:10.894 [2024-12-05 10:55:37.821179] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.894 00:12:10.894 real 0m0.494s 00:12:10.894 user 0m0.259s 00:12:10.894 sys 0m0.142s 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:12:10.894 ************************************ 00:12:10.894 END TEST dd_unknown_flag 00:12:10.894 ************************************ 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:10.894 ************************************ 00:12:10.894 START TEST dd_invalid_json 00:12:10.894 ************************************ 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:10.894 10:55:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:12:10.894 [2024-12-05 10:55:38.021122] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:10.895 [2024-12-05 10:55:38.021199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:12:11.161 [2024-12-05 10:55:38.170762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.161 [2024-12-05 10:55:38.221972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.161 [2024-12-05 10:55:38.222043] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:12:11.161 [2024-12-05 10:55:38.222056] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:11.161 [2024-12-05 10:55:38.222065] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:11.161 [2024-12-05 10:55:38.222099] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.161 00:12:11.161 real 0m0.325s 00:12:11.161 user 0m0.147s 00:12:11.161 sys 0m0.077s 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.161 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:12:11.161 ************************************ 00:12:11.161 END TEST dd_invalid_json 00:12:11.161 ************************************ 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:11.422 ************************************ 00:12:11.422 START TEST dd_invalid_seek 00:12:11.422 ************************************ 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:11.422 
10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:11.422 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:12:11.422 { 00:12:11.422 "subsystems": [ 00:12:11.422 { 00:12:11.422 "subsystem": "bdev", 00:12:11.422 "config": [ 00:12:11.422 { 00:12:11.422 "params": { 00:12:11.422 "block_size": 512, 00:12:11.422 "num_blocks": 512, 00:12:11.422 "name": "malloc0" 00:12:11.422 }, 00:12:11.422 "method": "bdev_malloc_create" 00:12:11.422 }, 00:12:11.422 { 00:12:11.422 "params": { 00:12:11.422 "block_size": 512, 00:12:11.422 "num_blocks": 512, 00:12:11.422 "name": "malloc1" 00:12:11.422 }, 00:12:11.422 "method": "bdev_malloc_create" 00:12:11.422 }, 00:12:11.422 { 00:12:11.422 "method": "bdev_wait_for_examine" 00:12:11.422 } 00:12:11.422 ] 00:12:11.422 } 00:12:11.422 ] 00:12:11.422 } 00:12:11.422 [2024-12-05 10:55:38.420871] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:11.422 [2024-12-05 10:55:38.420942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61890 ] 00:12:11.422 [2024-12-05 10:55:38.573755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.681 [2024-12-05 10:55:38.628005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.681 [2024-12-05 10:55:38.670176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:11.681 [2024-12-05 10:55:38.726263] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:12:11.681 [2024-12-05 10:55:38.726316] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:11.681 [2024-12-05 10:55:38.823202] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:11.940 00:12:11.940 real 0m0.529s 00:12:11.940 user 0m0.335s 00:12:11.940 sys 0m0.154s 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.940 ************************************ 00:12:11.940 END TEST dd_invalid_seek 00:12:11.940 ************************************ 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:11.940 ************************************ 00:12:11.940 START TEST dd_invalid_skip 00:12:11.940 ************************************ 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:11.940 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:11.941 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:11.941 10:55:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:12:11.941 [2024-12-05 10:55:39.017474] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:11.941 [2024-12-05 10:55:39.017549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61929 ] 00:12:11.941 { 00:12:11.941 "subsystems": [ 00:12:11.941 { 00:12:11.941 "subsystem": "bdev", 00:12:11.941 "config": [ 00:12:11.941 { 00:12:11.941 "params": { 00:12:11.941 "block_size": 512, 00:12:11.941 "num_blocks": 512, 00:12:11.941 "name": "malloc0" 00:12:11.941 }, 00:12:11.941 "method": "bdev_malloc_create" 00:12:11.941 }, 00:12:11.941 { 00:12:11.941 "params": { 00:12:11.941 "block_size": 512, 00:12:11.941 "num_blocks": 512, 00:12:11.941 "name": "malloc1" 00:12:11.941 }, 00:12:11.941 "method": "bdev_malloc_create" 00:12:11.941 }, 00:12:11.941 { 00:12:11.941 "method": "bdev_wait_for_examine" 00:12:11.941 } 00:12:11.941 ] 00:12:11.941 } 00:12:11.941 ] 00:12:11.941 } 00:12:12.200 [2024-12-05 10:55:39.167185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.200 [2024-12-05 10:55:39.216958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.200 [2024-12-05 10:55:39.258217] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.200 [2024-12-05 10:55:39.313196] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:12:12.200 [2024-12-05 10:55:39.313251] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.459 [2024-12-05 10:55:39.409443] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.459 ************************************ 00:12:12.459 END TEST dd_invalid_skip 00:12:12.459 ************************************ 00:12:12.459 00:12:12.459 real 0m0.518s 00:12:12.459 user 0m0.340s 00:12:12.459 sys 0m0.146s 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:12.459 ************************************ 00:12:12.459 START TEST dd_invalid_input_count 00:12:12.459 ************************************ 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:12:12.459 10:55:39 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:12.459 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:12.460 10:55:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:12:12.460 [2024-12-05 10:55:39.607820] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:12.460 [2024-12-05 10:55:39.608025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61957 ] 00:12:12.460 { 00:12:12.460 "subsystems": [ 00:12:12.460 { 00:12:12.460 "subsystem": "bdev", 00:12:12.460 "config": [ 00:12:12.460 { 00:12:12.460 "params": { 00:12:12.460 "block_size": 512, 00:12:12.460 "num_blocks": 512, 00:12:12.460 "name": "malloc0" 00:12:12.460 }, 00:12:12.460 "method": "bdev_malloc_create" 00:12:12.460 }, 00:12:12.460 { 00:12:12.460 "params": { 00:12:12.460 "block_size": 512, 00:12:12.460 "num_blocks": 512, 00:12:12.460 "name": "malloc1" 00:12:12.460 }, 00:12:12.460 "method": "bdev_malloc_create" 00:12:12.460 }, 00:12:12.460 { 00:12:12.460 "method": "bdev_wait_for_examine" 00:12:12.460 } 00:12:12.460 ] 00:12:12.460 } 00:12:12.460 ] 00:12:12.460 } 00:12:12.719 [2024-12-05 10:55:39.756943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.719 [2024-12-05 10:55:39.807984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.719 [2024-12-05 10:55:39.850181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:12.979 [2024-12-05 10:55:39.907141] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:12:12.979 [2024-12-05 10:55:39.907200] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:12.979 [2024-12-05 10:55:40.005555] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:12.979 00:12:12.979 real 0m0.524s 00:12:12.979 user 0m0.327s 00:12:12.979 sys 0m0.159s 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.979 ************************************ 00:12:12.979 END TEST dd_invalid_input_count 00:12:12.979 ************************************ 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.979 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:12.979 ************************************ 00:12:12.979 START TEST dd_invalid_output_count 00:12:13.238 ************************************ 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:12:13.238 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:13.239 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:12:13.239 { 00:12:13.239 "subsystems": [ 00:12:13.239 { 00:12:13.239 "subsystem": "bdev", 00:12:13.239 "config": [ 00:12:13.239 { 00:12:13.239 "params": { 00:12:13.239 "block_size": 512, 00:12:13.239 "num_blocks": 512, 00:12:13.239 "name": "malloc0" 00:12:13.239 }, 00:12:13.239 "method": "bdev_malloc_create" 00:12:13.239 }, 00:12:13.239 { 00:12:13.239 "method": "bdev_wait_for_examine" 00:12:13.239 } 00:12:13.239 ] 00:12:13.239 } 00:12:13.239 ] 00:12:13.239 } 00:12:13.239 [2024-12-05 10:55:40.202994] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 
initialization... 00:12:13.239 [2024-12-05 10:55:40.203185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61998 ] 00:12:13.239 [2024-12-05 10:55:40.354661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.499 [2024-12-05 10:55:40.405479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.499 [2024-12-05 10:55:40.447717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:13.499 [2024-12-05 10:55:40.495036] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:12:13.499 [2024-12-05 10:55:40.495092] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:13.499 [2024-12-05 10:55:40.591550] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.499 00:12:13.499 real 0m0.513s 00:12:13.499 user 0m0.321s 00:12:13.499 sys 0m0.144s 00:12:13.499 ************************************ 00:12:13.499 END TEST dd_invalid_output_count 00:12:13.499 ************************************ 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.499 10:55:40 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:13.758 ************************************ 00:12:13.758 START TEST dd_bs_not_multiple 00:12:13.758 ************************************ 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:12:13.758 10:55:40 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:12:13.758 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:12:13.759 10:55:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:12:13.759 [2024-12-05 10:55:40.790781] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:12:13.759 [2024-12-05 10:55:40.790858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62028 ] 00:12:13.759 { 00:12:13.759 "subsystems": [ 00:12:13.759 { 00:12:13.759 "subsystem": "bdev", 00:12:13.759 "config": [ 00:12:13.759 { 00:12:13.759 "params": { 00:12:13.759 "block_size": 512, 00:12:13.759 "num_blocks": 512, 00:12:13.759 "name": "malloc0" 00:12:13.759 }, 00:12:13.759 "method": "bdev_malloc_create" 00:12:13.759 }, 00:12:13.759 { 00:12:13.759 "params": { 00:12:13.759 "block_size": 512, 00:12:13.759 "num_blocks": 512, 00:12:13.759 "name": "malloc1" 00:12:13.759 }, 00:12:13.759 "method": "bdev_malloc_create" 00:12:13.759 }, 00:12:13.759 { 00:12:13.759 "method": "bdev_wait_for_examine" 00:12:13.759 } 00:12:13.759 ] 00:12:13.759 } 00:12:13.759 ] 00:12:13.759 } 00:12:14.018 [2024-12-05 10:55:40.939955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.018 [2024-12-05 10:55:40.994367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.018 [2024-12-05 10:55:41.038737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:14.018 [2024-12-05 10:55:41.096711] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:12:14.018 [2024-12-05 10:55:41.096771] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:14.278 [2024-12-05 10:55:41.200630] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:14.278 00:12:14.278 real 0m0.541s 00:12:14.278 user 0m0.344s 00:12:14.278 sys 0m0.155s 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:12:14.278 ************************************ 00:12:14.278 END TEST dd_bs_not_multiple 00:12:14.278 ************************************ 00:12:14.278 ************************************ 00:12:14.278 END TEST spdk_dd_negative 00:12:14.278 ************************************ 00:12:14.278 00:12:14.278 real 0m6.662s 00:12:14.278 user 0m3.317s 00:12:14.278 sys 0m2.818s 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.278 10:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:12:14.278 ************************************ 00:12:14.278 END TEST spdk_dd 00:12:14.278 ************************************ 00:12:14.278 00:12:14.278 real 1m12.210s 00:12:14.278 user 0m43.870s 00:12:14.278 sys 0m32.800s 00:12:14.278 10:55:41 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:14.278 10:55:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:12:14.538 10:55:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:12:14.538 10:55:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.538 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:12:14.538 10:55:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:12:14.538 10:55:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:12:14.538 10:55:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:14.538 10:55:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.538 10:55:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.538 10:55:41 -- common/autotest_common.sh@10 -- # set +x 00:12:14.538 ************************************ 00:12:14.538 START TEST nvmf_tcp 00:12:14.538 ************************************ 00:12:14.538 10:55:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:12:14.538 * Looking for test storage... 00:12:14.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:14.538 10:55:41 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.538 10:55:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.538 10:55:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.797 10:55:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.797 10:55:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:12:14.797 10:55:41 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.798 --rc genhtml_branch_coverage=1 00:12:14.798 --rc genhtml_function_coverage=1 00:12:14.798 --rc genhtml_legend=1 00:12:14.798 --rc geninfo_all_blocks=1 00:12:14.798 --rc geninfo_unexecuted_blocks=1 00:12:14.798 00:12:14.798 ' 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.798 --rc genhtml_branch_coverage=1 00:12:14.798 --rc genhtml_function_coverage=1 00:12:14.798 --rc genhtml_legend=1 00:12:14.798 --rc geninfo_all_blocks=1 00:12:14.798 --rc geninfo_unexecuted_blocks=1 00:12:14.798 00:12:14.798 ' 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.798 --rc genhtml_branch_coverage=1 00:12:14.798 --rc genhtml_function_coverage=1 00:12:14.798 --rc genhtml_legend=1 00:12:14.798 --rc geninfo_all_blocks=1 00:12:14.798 --rc geninfo_unexecuted_blocks=1 00:12:14.798 00:12:14.798 ' 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.798 --rc genhtml_branch_coverage=1 00:12:14.798 --rc genhtml_function_coverage=1 00:12:14.798 --rc genhtml_legend=1 00:12:14.798 --rc geninfo_all_blocks=1 00:12:14.798 --rc geninfo_unexecuted_blocks=1 00:12:14.798 00:12:14.798 ' 00:12:14.798 10:55:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:12:14.798 10:55:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:14.798 10:55:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.798 10:55:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.798 ************************************ 00:12:14.798 START TEST nvmf_target_core 00:12:14.798 ************************************ 00:12:14.798 10:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:12:14.798 * Looking for test storage... 00:12:14.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:12:14.798 10:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.798 10:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.798 10:55:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.058 --rc genhtml_branch_coverage=1 00:12:15.058 --rc genhtml_function_coverage=1 00:12:15.058 --rc genhtml_legend=1 00:12:15.058 --rc geninfo_all_blocks=1 00:12:15.058 --rc geninfo_unexecuted_blocks=1 00:12:15.058 00:12:15.058 ' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.058 --rc genhtml_branch_coverage=1 00:12:15.058 --rc genhtml_function_coverage=1 00:12:15.058 --rc genhtml_legend=1 00:12:15.058 --rc geninfo_all_blocks=1 00:12:15.058 --rc geninfo_unexecuted_blocks=1 00:12:15.058 00:12:15.058 ' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.058 --rc genhtml_branch_coverage=1 00:12:15.058 --rc genhtml_function_coverage=1 00:12:15.058 --rc genhtml_legend=1 00:12:15.058 --rc geninfo_all_blocks=1 00:12:15.058 --rc geninfo_unexecuted_blocks=1 00:12:15.058 00:12:15.058 ' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.058 --rc genhtml_branch_coverage=1 00:12:15.058 --rc genhtml_function_coverage=1 00:12:15.058 --rc genhtml_legend=1 00:12:15.058 --rc geninfo_all_blocks=1 00:12:15.058 --rc geninfo_unexecuted_blocks=1 00:12:15.058 00:12:15.058 ' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:15.058 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:15.059 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.059 10:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:15.059 ************************************ 00:12:15.059 START TEST nvmf_host_management 00:12:15.059 ************************************ 00:12:15.059 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:15.319 * Looking for test storage... 00:12:15.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.319 --rc genhtml_branch_coverage=1 00:12:15.319 --rc genhtml_function_coverage=1 00:12:15.319 --rc genhtml_legend=1 00:12:15.319 --rc geninfo_all_blocks=1 00:12:15.319 --rc geninfo_unexecuted_blocks=1 00:12:15.319 00:12:15.319 ' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.319 --rc genhtml_branch_coverage=1 00:12:15.319 --rc genhtml_function_coverage=1 00:12:15.319 --rc genhtml_legend=1 00:12:15.319 --rc geninfo_all_blocks=1 00:12:15.319 --rc geninfo_unexecuted_blocks=1 00:12:15.319 00:12:15.319 ' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.319 --rc genhtml_branch_coverage=1 00:12:15.319 --rc genhtml_function_coverage=1 00:12:15.319 --rc genhtml_legend=1 00:12:15.319 --rc geninfo_all_blocks=1 00:12:15.319 --rc geninfo_unexecuted_blocks=1 00:12:15.319 00:12:15.319 ' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.319 --rc genhtml_branch_coverage=1 00:12:15.319 --rc genhtml_function_coverage=1 00:12:15.319 --rc genhtml_legend=1 00:12:15.319 --rc geninfo_all_blocks=1 00:12:15.319 --rc geninfo_unexecuted_blocks=1 00:12:15.319 00:12:15.319 ' 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
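The xtrace runs above (repeated once per nested test suite) are scripts/common.sh deciding whether the installed lcov predates version 2: each version string is split on the characters ".-:" into an array and the components are compared numerically, left to right. A minimal standalone sketch of that comparison in plain bash, simplified from the cmp_versions/lt pair the trace walks through and not the exact SPDK helper:

    #!/usr/bin/env bash
    # lt VER1 VER2 -> exit 0 when VER1 < VER2, comparing dot-separated
    # components numerically (missing components default to 0)
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # strictly greater
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy --rc lcov_* spelling"

This check is why the log exports LCOV_OPTS with the --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 spelling: that form is the one lcov releases before 2.x accept.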
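Also worth flagging: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected" message that appears each time common.sh is sourced (once above, and again below) is bash's test builtin rejecting an empty operand in a numeric comparison, not a fatal failure; the guarded command is simply skipped. A sketch of the pattern and a defensive rewrite, using a hypothetical variable name since the log does not show which variable is empty:

    # an unset/empty variable fed to an integer test reproduces the logged message
    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ] && echo enabled   # -> [: : integer expression expected
    # defaulting the expansion keeps the test well-formed
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled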
00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.319 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:15.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:15.320 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.320 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:12:15.320 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == 
tcp ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator0 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:15.581 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:12:15.581 10.0.0.1 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:15.581 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:15.582 10.0.0.2 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:12:15.582 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:15.842 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:12:15.842 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:12:15.843 10.0.0.3 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:15.843 10.0.0.4 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.843 10:55:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:15.843 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:15.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:12:15.844 00:12:15.844 --- 10.0.0.1 ping statistics --- 00:12:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.844 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:15.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:12:15.844 00:12:15.844 --- 10.0.0.2 ping statistics --- 00:12:15.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.844 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:12:15.844 10:55:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:15.844 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:15.844 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:16.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:16.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:12:16.105 00:12:16.105 --- 10.0.0.3 ping statistics --- 00:12:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.105 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:12:16.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:16.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:12:16.105 00:12:16.105 --- 10.0.0.4 ping statistics --- 00:12:16.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.105 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:16.105 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
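
Every address lookup in this stretch follows one pattern from nvmf/setup.sh: each veth's IP was stashed in the kernel's ifalias field at creation time and is read back from /sys/class/net/<dev>/ifalias, with a bash nameref (local -n ns=NVMF_TARGET_NS_CMD) selecting whether the read runs inside the nvmf_ns_spdk namespace. A minimal sketch of that dispatch, assuming an array-valued namespace prefix instead of the eval'd string the real helpers use:

```bash
#!/usr/bin/env bash
# Sketch only: the real get_ip_address/ping_ip live in test/nvmf/setup.sh
# and expand the namespace prefix through eval; an array is used here so
# the example stays quoting-safe.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

get_ip_address() {
	local dev=$1 in_ns=${2:-} ip
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns # nameref: $2 names the prefix array to use
		ip=$("${ns[@]}" cat "/sys/class/net/$dev/ifalias")
	else
		ip=$(cat "/sys/class/net/$dev/ifalias")
	fi
	[[ -n $ip ]] && echo "$ip"
}

ping_ip() {
	local ip=$1 in_ns=${2:-}
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns
		"${ns[@]}" ping -c 1 "$ip"
	else
		ping -c 1 "$ip"
	fi
}

# As in the log: initiators are pinged from inside the target namespace,
# targets from the host side, proving both directions of each veth pair.
ping_ip "$(get_ip_address initiator0)" NVMF_TARGET_NS_CMD
ping_ip "$(get_ip_address target0 NVMF_TARGET_NS_CMD)"
```
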
00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # 
nvmfappstart -m 0x1E 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=62385 00:12:16.106 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 62385 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62385 ']' 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.107 10:55:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.107 [2024-12-05 10:55:43.259000] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:16.107 [2024-12-05 10:55:43.259077] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.366 [2024-12-05 10:55:43.395645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.366 [2024-12-05 10:55:43.449134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.366 [2024-12-05 10:55:43.449183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.366 [2024-12-05 10:55:43.449193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.366 [2024-12-05 10:55:43.449201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.366 [2024-12-05 10:55:43.449208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
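
nvmfappstart here amounts to launching nvmf_tgt inside the namespace and blocking until its JSON-RPC socket answers; the EAL and reactor notices that follow are the app coming up. A rough sketch, with the poll loop standing in for autotest_common.sh's waitforlisten and the SPDK repo root assumed as the working directory:

```bash
# Start the target in the test namespace; -m 0x1E matches the log above.
ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# The UNIX socket lives on the shared filesystem, so the host-side rpc.py
# can poll it even though the app runs in another network namespace.
for ((i = 100; i != 0; i--)); do
	if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
		break
	fi
	sleep 0.5
done
((i != 0)) || { echo "nvmf_tgt (pid $nvmfpid) never started listening" >&2; exit 1; }
```
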
00:12:16.366 [2024-12-05 10:55:43.450051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.366 [2024-12-05 10:55:43.450156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.366 [2024-12-05 10:55:43.450196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.366 [2024-12-05 10:55:43.450204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.366 [2024-12-05 10:55:43.514864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.303 [2024-12-05 10:55:44.220482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.303 Malloc0 00:12:17.303 [2024-12-05 10:55:44.301163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62439 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62439 /var/tmp/bdevperf.sock 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62439 ']' 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:17.303 { 00:12:17.303 "params": { 00:12:17.303 "name": "Nvme$subsystem", 00:12:17.303 "trtype": "$TEST_TRANSPORT", 00:12:17.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.303 "adrfam": "ipv4", 00:12:17.303 "trsvcid": "$NVMF_PORT", 00:12:17.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.303 "hdgst": ${hdgst:-false}, 00:12:17.303 "ddgst": ${ddgst:-false} 00:12:17.303 }, 00:12:17.303 "method": "bdev_nvme_attach_controller" 00:12:17.303 } 00:12:17.303 EOF 00:12:17.303 )") 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
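
The gen_nvmf_target_json expansion above is worth unpacking: each subsystem id yields one bdev_nvme_attach_controller entry, captured through a here-doc into a config array and comma-joined under the bdev subsystem to form bdevperf's --json input. A cut-down sketch with the log's values hard-coded (the real function in test/nvmf/common.sh substitutes $NVMF_FIRST_TARGET_IP, $NVMF_PORT, and the digest flags):

```bash
gen_nvmf_target_json() {
	local subsystem config=()

	for subsystem in "${@:-0}"; do
		# One attach-controller RPC per subsystem id passed in.
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done

	# Comma-join the entries and pretty-print, as the "jq ." step shows.
	local IFS=,
	jq . <<EOF
{"subsystems": [{"subsystem": "bdev", "config": [${config[*]}]}]}
EOF
}
```

bdevperf then reads this through --json /dev/fd/63 because the caller hands it over via process substitution, so no temporary config file is written.
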
00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:12:17.303 10:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:17.303 "params": { 00:12:17.303 "name": "Nvme0", 00:12:17.303 "trtype": "tcp", 00:12:17.303 "traddr": "10.0.0.2", 00:12:17.303 "adrfam": "ipv4", 00:12:17.303 "trsvcid": "4420", 00:12:17.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:17.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:17.303 "hdgst": false, 00:12:17.303 "ddgst": false 00:12:17.303 }, 00:12:17.303 "method": "bdev_nvme_attach_controller" 00:12:17.303 }' 00:12:17.303 [2024-12-05 10:55:44.425937] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:17.303 [2024-12-05 10:55:44.426010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62439 ] 00:12:17.562 [2024-12-05 10:55:44.576045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.562 [2024-12-05 10:55:44.625788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.562 [2024-12-05 10:55:44.676410] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:17.821 Running I/O for 10 seconds... 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:18.399 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1097 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1097 -ge 100 ']' 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.400 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:18.400 [2024-12-05 10:55:45.391940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.391986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:18.400 [2024-12-05 10:55:45.392555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 
[2024-12-05 10:55:45.392794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.400 [2024-12-05 10:55:45.392804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.400 [2024-12-05 10:55:45.392816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.392978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.392990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.393001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.393018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 10:55:45.393032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.401 [2024-12-05 10:55:45.393048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:18.401 [2024-12-05 
10:55:45.393059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:18.401 [2024-12-05 10:55:45.393075 through 10:55:45.393576] nvme_qpair.c: 243/474: *NOTICE*: WRITE sqid:1 cid:45-63 nsid:1 lba:30336-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (19 identical command/completion pairs)
00:12:18.401 [2024-12-05 10:55:45.393589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15502d0 is same with the state(6) to be set
00:12:18.401 [2024-12-05 10:55:45.393752 through 10:55:45.393857] nvme_qpair.c: 223/474: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (4 identical command/completion pairs)
00:12:18.401 [2024-12-05 10:55:45.393869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1555ce0 is same with the state(6) to be set
00:12:18.401 [2024-12-05 10:55:45.394821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:12:18.401 task offset: 24576 on job bdev=Nvme0n1 fails
00:12:18.401
00:12:18.401 Latency(us)
00:12:18.401 [2024-12-05T10:55:45.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:18.401 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:18.401 Job: Nvme0n1 ended in about 0.60 seconds with error
00:12:18.401 Verification LBA range: start 0x0 length 0x400
00:12:18.401 Nvme0n1 : 0.60 2027.28 126.70 106.70 0.00 29368.23 2342.45 27793.58
00:12:18.401 [2024-12-05T10:55:45.560Z] ===================================================================================================================
00:12:18.401 [2024-12-05T10:55:45.560Z] Total : 2027.28 126.70 106.70 0.00 29368.23 2342.45 27793.58
00:12:18.401 [2024-12-05 10:55:45.396575] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:18.401 [2024-12-05 10:55:45.396606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1555ce0 (9): Bad file descriptor
00:12:18.401 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.401 10:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:12:18.401 [2024-12-05 10:55:45.401305] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
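For context: the abort storm above is expected here. The test kills the target while bdevperf still has writes in flight, so every queued command completes as ABORTED - SQ DELETION and the host then resets the controller. In the trace below, kill -9 62439 reports "No such process" because the target already exited, and the harness continues via true. A minimal sketch of that tolerate-a-dead-PID idiom, with a hypothetical helper name that is not part of the harness:

# Kill the app under test without failing the script when it is already gone.
kill_if_running() {
    local pid=$1
    kill -9 "$pid" 2>/dev/null || true   # ESRCH ("No such process") is acceptable here
    wait "$pid" 2>/dev/null || true      # reap it if it was a child of this shell
}
kill_if_running 62439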
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62439
00:12:19.334 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62439) - No such process
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=()
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:12:19.334 {
00:12:19.334 "params": {
00:12:19.334 "name": "Nvme$subsystem",
00:12:19.334 "trtype": "$TEST_TRANSPORT",
00:12:19.334 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:19.334 "adrfam": "ipv4",
00:12:19.334 "trsvcid": "$NVMF_PORT",
00:12:19.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:19.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:19.334 "hdgst": ${hdgst:-false},
00:12:19.334 "ddgst": ${ddgst:-false}
00:12:19.334 },
00:12:19.334 "method": "bdev_nvme_attach_controller"
00:12:19.334 }
00:12:19.334 EOF
00:12:19.334 )")
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq .
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=,
00:12:19.334 10:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:12:19.334 "params": {
00:12:19.334 "name": "Nvme0",
00:12:19.334 "trtype": "tcp",
00:12:19.334 "traddr": "10.0.0.2",
00:12:19.334 "adrfam": "ipv4",
00:12:19.334 "trsvcid": "4420",
00:12:19.334 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:19.334 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:12:19.334 "hdgst": false,
00:12:19.334 "ddgst": false
00:12:19.334 },
00:12:19.334 "method": "bdev_nvme_attach_controller"
00:12:19.334 }'
00:12:19.334 [2024-12-05 10:55:46.461448] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
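For reference, the bdevperf invocation above receives its controller config as JSON on inherited file descriptor 62 (--json /dev/fd/62), which gen_nvmf_target_json fills in from the template printed in the trace. A standalone sketch of an equivalent invocation, assuming the standard SPDK app JSON-config wrapper ("subsystems"/"config") around the bdev_nvme_attach_controller call; the binary path and addresses are copied from this log:

# Feed the attach-controller config to bdevperf on fd 62 via a here-document.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF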
00:12:19.334 [2024-12-05 10:55:46.461521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62477 ]
00:12:19.591 [2024-12-05 10:55:46.615309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.591 [2024-12-05 10:55:46.665605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.591 [2024-12-05 10:55:46.715831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:12:19.849 Running I/O for 1 seconds...
00:12:20.783 1984.00 IOPS, 124.00 MiB/s
00:12:20.783 Latency(us)
00:12:20.783 [2024-12-05T10:55:47.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:20.783 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:20.783 Verification LBA range: start 0x0 length 0x400
00:12:20.783 Nvme0n1 : 1.01 2020.37 126.27 0.00 0.00 31173.91 3316.28 29267.48
00:12:20.783 [2024-12-05T10:55:47.942Z] ===================================================================================================================
00:12:20.783 [2024-12-05T10:55:47.942Z] Total : 2020.37 126.27 0.00 0.00 31173.91 3316.28 29267.48
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20}
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:12:21.041 rmmod nvme_tcp
00:12:21.041 rmmod nvme_fabrics
00:12:21.041 rmmod nvme_keyring
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 62385 ']'
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 62385
00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62385 ']'
00:12:21.041 10:55:48
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62385 00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.041 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62385 00:12:21.299 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:21.299 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:21.299 killing process with pid 62385 00:12:21.299 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62385' 00:12:21.299 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62385 00:12:21.299 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62385 00:12:21.558 [2024-12-05 10:55:48.502582] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local 
dev=initiator0 in_ns= 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:12:21.558 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:12:21.559 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:12:21.559 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:21.559 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:12:21.559 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:21.820 00:12:21.820 real 0m6.611s 00:12:21.820 user 0m22.792s 00:12:21.820 sys 0m1.923s 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:21.820 ************************************ 00:12:21.820 END TEST nvmf_host_management 00:12:21.820 ************************************ 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:21.820 ************************************ 00:12:21.820 START TEST nvmf_lvol 00:12:21.820 ************************************ 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:21.820 * Looking for test storage... 00:12:21.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:12:21.820 10:55:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:22.080 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:22.080 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.080 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.080 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.081 --rc genhtml_branch_coverage=1 00:12:22.081 --rc genhtml_function_coverage=1 00:12:22.081 --rc genhtml_legend=1 00:12:22.081 --rc geninfo_all_blocks=1 00:12:22.081 --rc geninfo_unexecuted_blocks=1 00:12:22.081 00:12:22.081 ' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.081 --rc genhtml_branch_coverage=1 00:12:22.081 --rc genhtml_function_coverage=1 00:12:22.081 --rc genhtml_legend=1 00:12:22.081 --rc geninfo_all_blocks=1 00:12:22.081 --rc geninfo_unexecuted_blocks=1 00:12:22.081 00:12:22.081 ' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.081 --rc genhtml_branch_coverage=1 00:12:22.081 --rc genhtml_function_coverage=1 00:12:22.081 --rc genhtml_legend=1 00:12:22.081 --rc geninfo_all_blocks=1 00:12:22.081 --rc geninfo_unexecuted_blocks=1 00:12:22.081 00:12:22.081 ' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.081 --rc genhtml_branch_coverage=1 00:12:22.081 --rc genhtml_function_coverage=1 00:12:22.081 --rc genhtml_legend=1 00:12:22.081 --rc geninfo_all_blocks=1 00:12:22.081 --rc geninfo_unexecuted_blocks=1 00:12:22.081 00:12:22.081 ' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.081 10:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:22.081 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:22.081 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:12:22.082 10:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator0 up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:12:22.082 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:22.083 10:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:12:22.083 10.0.0.1 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:22.083 10.0.0.2 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:12:22.083 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:22.342 10:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:22.342 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local 
dev=target1 ns=nvmf_ns_spdk 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:12:22.343 10.0.0.3 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:22.343 10.0.0.4 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target1_br 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 
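Condensed for readability: the setup trace above gives each interface pair N a host-side veth initiatorN, bridged to nvmf_br through its _br peer, and a target-side veth targetN moved into the nvmf_ns_spdk namespace, addressed 10.0.0.1/10.0.0.2 for pair 0 and 10.0.0.3/10.0.0.4 for pair 1. The same topology for pair 0 as one-shot commands, equivalent to what nvmf/setup.sh just executed piecemeal (pair 1 is identical with initiator1/target1 and the next two addresses):

# Namespace, bridge, and one veth pair, as built by the trace above.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk              # target side lives in the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br master nvmf_br && ip link set target0_br up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the target port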
00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:22.343 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:12:22.603 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:22.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:22.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:12:22.604 00:12:22.604 --- 10.0.0.1 ping statistics --- 00:12:22.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.604 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:22.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:22.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:12:22.604 00:12:22.604 --- 10.0.0.2 ping statistics --- 00:12:22.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.604 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:22.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:22.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.141 ms 00:12:22.604 00:12:22.604 --- 10.0.0.3 ping statistics --- 00:12:22.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.604 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:12:22.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:22.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.188 ms 00:12:22.604 00:12:22.604 --- 10.0.0.4 ping statistics --- 00:12:22.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.604 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:22.604 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
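
Each get_ip_address call in this stretch just reads back the ifalias written at set_ip time, inside or outside the namespace, and the results are pinned to the legacy NVMF_*_INITIATOR_IP / NVMF_*_TARGET_IP variables. A simplified version of the lookup (the real helper resolves its in_ns argument through a nameref to NVMF_TARGET_NS_CMD; here the namespace name is passed directly):

  get_ip_address() {
    local dev=$1 netns=${2:-} ip
    if [[ -n $netns ]]; then
      ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
    else
      ip=$(< "/sys/class/net/$dev/ifalias")
    fi
    # only echo when an address was actually recorded for this device
    [[ -n $ip ]] && echo "$ip"
  }

  NVMF_SECOND_INITIATOR_IP=$(get_ip_address initiator1)           # -> 10.0.0.3
  NVMF_SECOND_TARGET_IP=$(get_ip_address target1 nvmf_ns_spdk)    # -> 10.0.0.4
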
00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:22.605 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=62745 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 62745 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62745 ']' 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.864 10:55:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.864 [2024-12-05 10:55:49.839789] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:22.864 [2024-12-05 10:55:49.840020] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.865 [2024-12-05 10:55:49.993744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.123 [2024-12-05 10:55:50.045956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.123 [2024-12-05 10:55:50.045999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.123 [2024-12-05 10:55:50.046009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.123 [2024-12-05 10:55:50.046017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.123 [2024-12-05 10:55:50.046024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.123 [2024-12-05 10:55:50.046935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.123 [2024-12-05 10:55:50.047022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.123 [2024-12-05 10:55:50.047024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.123 [2024-12-05 10:55:50.089477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.688 10:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:23.946 [2024-12-05 10:55:50.987889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.946 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:24.206 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:24.206 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:12:24.463 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:24.463 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:24.720 10:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:25.019 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e0674f43-9545-4866-9a9f-93e16779199f 00:12:25.019 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e0674f43-9545-4866-9a9f-93e16779199f lvol 20 00:12:25.279 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=36b63b0c-c189-4a50-b937-6c4439d86a46 00:12:25.279 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:25.538 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36b63b0c-c189-4a50-b937-6c4439d86a46 00:12:25.797 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:26.057 [2024-12-05 10:55:52.964045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.057 10:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:26.057 10:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:26.057 10:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62821 00:12:26.057 10:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:27.436 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 36b63b0c-c189-4a50-b937-6c4439d86a46 MY_SNAPSHOT 00:12:27.436 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=246fed8d-738b-417c-aa55-6e01db298231 00:12:27.436 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 36b63b0c-c189-4a50-b937-6c4439d86a46 30 00:12:27.694 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 246fed8d-738b-417c-aa55-6e01db298231 MY_CLONE 00:12:27.953 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=20e0a204-6064-4dea-9163-b27b2fa55bc5 00:12:27.953 10:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 20e0a204-6064-4dea-9163-b27b2fa55bc5 00:12:28.517 10:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62821 00:12:36.647 Initializing NVMe Controllers 
00:12:36.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:36.647 Controller IO queue size 128, less than required. 00:12:36.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:36.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:36.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:36.647 Initialization complete. Launching workers. 00:12:36.647 ======================================================== 00:12:36.647 Latency(us) 00:12:36.647 Device Information : IOPS MiB/s Average min max 00:12:36.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11361.85 44.38 11272.57 462.17 91946.72 00:12:36.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11283.55 44.08 11346.54 1623.40 53291.20 00:12:36.647 ======================================================== 00:12:36.647 Total : 22645.40 88.46 11309.43 462.17 91946.72 00:12:36.647 00:12:36.647 10:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.647 10:56:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 36b63b0c-c189-4a50-b937-6c4439d86a46 00:12:36.906 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e0674f43-9545-4866-9a9f-93e16779199f 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:37.165 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:37.165 rmmod nvme_tcp 00:12:37.165 rmmod nvme_fabrics 00:12:37.424 rmmod nvme_keyring 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 62745 ']' 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 62745 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62745 ']' 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62745 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62745 00:12:37.424 killing process with pid 62745 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62745' 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62745 00:12:37.424 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62745 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 
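
The nvmf_fini teardown running here undoes everything setup built: the bridge goes first, then the initiator-side veth devices (the target* peers vanish with the namespace, hence the continue branches), and finally every iptables rule tagged with an SPDK_NVMF comment is filtered out of a save/restore round trip, as the iptr trace just below shows. Condensed into a sketch:

  ip netns delete nvmf_ns_spdk 2> /dev/null        # takes target0/target1 with it
  [[ -e /sys/class/net/nvmf_br/address ]] && ip link delete nvmf_br
  for dev in initiator0 initiator1; do
    [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
  done
  # ipts added each rule with an "SPDK_NVMF:<rule>" comment, so dropping the
  # commented lines from iptables-save output removes exactly those rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
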
00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:37.683 ************************************ 00:12:37.683 END TEST nvmf_lvol 00:12:37.683 ************************************ 00:12:37.683 00:12:37.683 real 0m16.022s 00:12:37.683 user 1m2.926s 00:12:37.683 sys 0m5.964s 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.683 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.941 10:56:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:37.941 10:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.941 10:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.941 10:56:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:37.941 ************************************ 00:12:37.941 START TEST nvmf_lvs_grow 00:12:37.941 ************************************ 00:12:37.941 10:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:37.941 * Looking for test storage... 
00:12:37.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.941 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.941 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.941 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.200 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.200 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.200 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.200 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.200 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.201 --rc genhtml_branch_coverage=1 00:12:38.201 --rc genhtml_function_coverage=1 00:12:38.201 --rc genhtml_legend=1 00:12:38.201 --rc geninfo_all_blocks=1 00:12:38.201 --rc geninfo_unexecuted_blocks=1 00:12:38.201 00:12:38.201 ' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.201 --rc genhtml_branch_coverage=1 00:12:38.201 --rc genhtml_function_coverage=1 00:12:38.201 --rc genhtml_legend=1 00:12:38.201 --rc geninfo_all_blocks=1 00:12:38.201 --rc geninfo_unexecuted_blocks=1 00:12:38.201 00:12:38.201 ' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.201 --rc genhtml_branch_coverage=1 00:12:38.201 --rc genhtml_function_coverage=1 00:12:38.201 --rc genhtml_legend=1 00:12:38.201 --rc geninfo_all_blocks=1 00:12:38.201 --rc geninfo_unexecuted_blocks=1 00:12:38.201 00:12:38.201 ' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.201 --rc genhtml_branch_coverage=1 00:12:38.201 --rc genhtml_function_coverage=1 00:12:38.201 --rc genhtml_legend=1 00:12:38.201 --rc geninfo_all_blocks=1 00:12:38.201 --rc geninfo_unexecuted_blocks=1 00:12:38.201 00:12:38.201 ' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:38.201 10:56:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:38.201 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:38.201 10:56:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:38.201 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:38.202 10:56:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 
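
The nvmf_lvs_grow run now rebuilds the same topology from scratch, and this create_veth/set_up stretch is the core of it: each logical device is one end of a veth pair whose *_br peer gets enslaved to nvmf_br. The helper being traced boils down to roughly this (a sketch using the device names from this log; in the script the bridging and netns moves happen in later steps):

  create_veth() {
    local dev=$1 peer=$2
    ip link add "$dev" type veth peer name "$peer"
    ip link set "$dev" up
    ip link set "$peer" up
  }

  create_veth initiator0 initiator0_br
  create_veth target0 target0_br
  ip link set target0 netns nvmf_ns_spdk    # the add_to_ns step traced below
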
00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 
167772161 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:12:38.202 10.0.0.1 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:12:38.202 10.0.0.2 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:12:38.202 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@207 -- # ip link set initiator0 up 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:12:38.203 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:12:38.462 10:56:05 
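[annotation] That completes wiring for the first initiator/target pair. Stripped of the helper indirection, setup_interface_pair for id 0 reduces to roughly the following (every command appears verbatim in the trace; the interleaved `ip link set ... up` calls are elided). The ifalias writes are the detail to note: they let later helpers read an interface's address back with a single cat instead of parsing `ip addr` output:

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk          # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip link set initiator0_br master nvmf_br        # both _br peers join the bridge
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'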
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local 
dev=target1 in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:12:38.462 10.0.0.3 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:12:38.462 10:56:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:12:38.462 10.0.0.4 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:12:38.462 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 
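[annotation] The long integers passed to set_ip (167772161 through 167772164) are a single shared ip_pool counter starting at 0x0a000001; val_to_ip unpacks each value into a dotted quad via the printf calls visible above. A minimal sketch consistent with those calls (the exact arithmetic in setup.sh may be written differently):

  val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
      $(( val >> 24 )) $(( (val >> 16) & 0xff )) \
      $(( (val >> 8) & 0xff )) $(( val & 0xff ))
  }
  val_to_ip 167772161   # 10.0.0.1 (initiator0)
  val_to_ip 167772164   # 10.0.0.4 (target1)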
00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:12:38.463 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:38.766 10:56:05 
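[annotation] With both pairs recorded in dev_map, ping_ips starts its connectivity sweep; the cat above is about to print 10.0.0.1. A simplified sketch of that lookup, which just reads the ifalias file populated by set_ip (the real helper resolves the device and namespace through dev_map and a bash nameref rather than hardcoding them as done here):

  get_ip_address() {   # simplified: assumes the nvmf_ns_spdk namespace whenever in_ns is set
    local dev=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
      ip netns exec nvmf_ns_spdk cat "/sys/class/net/$dev/ifalias"
    else
      cat "/sys/class/net/$dev/ifalias"
    fi
  }
  get_ip_address initiator0 ''   # -> 10.0.0.1
  get_ip_address target0 ns      # -> 10.0.0.2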
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.766 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:38.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:12:38.767 00:12:38.767 --- 10.0.0.1 ping statistics --- 00:12:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.767 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:38.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:12:38.767 00:12:38.767 --- 10.0.0.2 ping statistics --- 00:12:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.767 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:38.767 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:38.767 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.154 ms 00:12:38.767 00:12:38.767 --- 10.0.0.3 ping statistics --- 00:12:38.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.767 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:38.767 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:12:38.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:38.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:12:38.768 00:12:38.768 --- 10.0.0.4 ping statistics --- 00:12:38.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.768 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:38.768 10:56:05 
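[annotation] All four addresses answered with zero packet loss. Condensed, the ping_ips sweep issued one packet per direction per pair:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator0
  ping -c 1 10.0.0.2                              # host      -> target0
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
  ping -c 1 10.0.0.4                              # host      -> target1

The nvmf_legacy_env pass the trace has just entered replays the same ifalias reads to export the addresses under their legacy variable names.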
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 
NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:12:38.768 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.769 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=63198 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 63198 00:12:39.032 10:56:05 
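[annotation] Net result of nvmf_legacy_env for this run, every value read back from the ifalias files above:

  NVMF_TARGET_INTERFACE=target0
  NVMF_TARGET_INTERFACE2=target1
  NVMF_FIRST_INITIATOR_IP=10.0.0.1
  NVMF_SECOND_INITIATOR_IP=10.0.0.3
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_SECOND_TARGET_IP=10.0.0.4
  NVMF_TRANSPORT_OPTS='-t tcp -o'

With the environment in place, nvmftestinit loads nvme-tcp and nvmfappstart launches the target.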
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63198 ']' 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.032 10:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:39.032 [2024-12-05 10:56:05.988054] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:12:39.032 [2024-12-05 10:56:05.988137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.032 [2024-12-05 10:56:06.126842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.032 [2024-12-05 10:56:06.182732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.032 [2024-12-05 10:56:06.182790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.032 [2024-12-05 10:56:06.182802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.032 [2024-12-05 10:56:06.182811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.032 [2024-12-05 10:56:06.182819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
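[annotation] As traced, nvmfappstart runs the target inside the namespace on a single core (-m 0x1) with all tracepoint groups enabled (-e 0xFFFF), then waits for its RPC socket. Roughly (the backgrounding shown here is a sketch of what the harness does, not its literal code):

  ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!   # 63198 in this run
  # waitforlisten polls until /var/tmp/spdk.sock accepts RPC connections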
00:12:39.032 [2024-12-05 10:56:06.183118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.290 [2024-12-05 10:56:06.227373] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.856 10:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:40.114 [2024-12-05 10:56:07.195168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.114 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:40.114 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:40.115 ************************************ 00:12:40.115 START TEST lvs_grow_clean 00:12:40.115 ************************************ 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:40.115 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:40.682 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
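[annotation] The lvs_grow_clean scenario builds its lvstore on a plain file: a 200 MiB sparse file exposed as an AIO bdev with a 4 KiB block size. The commands, as traced:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096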
aio_bdev=aio_bdev 00:12:40.682 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:40.682 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:40.682 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:40.682 10:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:40.941 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:40.941 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:40.941 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1e425004-8e58-4c94-9545-2c477c0e3b2a lvol 150 00:12:41.200 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4e7c8248-c42b-4fe0-b661-4661637924e2 00:12:41.200 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:41.200 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:41.459 [2024-12-05 10:56:08.566082] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:41.459 [2024-12-05 10:56:08.566149] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:41.459 true 00:12:41.459 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:41.459 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:41.719 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:41.719 10:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:41.978 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e7c8248-c42b-4fe0-b661-4661637924e2 00:12:42.237 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:42.498 [2024-12-05 10:56:09.509051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.498 10:56:09 
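[annotation] The numbers here are worth pinning down: with --cluster-sz 4194304, the 200 MiB backing file gives 50 raw clusters, of which 49 surface as total_data_clusters (the remainder evidently holds lvstore metadata); a 150 MiB lvol is then carved out. Growing the file to 400 MiB and rescanning takes the bdev from 51200 to 102400 blocks of 4096 bytes, but the lvstore keeps reporting 49 data clusters until bdev_lvol_grow_lvstore runs later in the test:

  echo $(( 200 * 1024 * 1024 / 4194304 ))   # 50 raw clusters on the 200 MiB file
  echo $(( 102400 * 4096 / 1024 / 1024 ))   # 400 (MiB) seen by the bdev after the rescan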
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63286 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63286 /var/tmp/bdevperf.sock 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63286 ']' 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.811 10:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:42.811 [2024-12-05 10:56:09.784639] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
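[annotation] The initiator side comes up next: bdevperf on core 1 (-m 0x2) with a 10-second, queue-depth-128, 4 KiB random-write job, started suspended (-z) until perform_tests is invoked; -S 1 accounts for the per-second IOPS samples further below. As the following trace lines show, an NVMe-oF controller is then attached over TCP to the subsystem created above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0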
00:12:42.811 [2024-12-05 10:56:09.784850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63286 ] 00:12:42.811 [2024-12-05 10:56:09.938404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.109 [2024-12-05 10:56:09.990649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.109 [2024-12-05 10:56:10.033408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:43.676 10:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.676 10:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:43.676 10:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:43.935 Nvme0n1 00:12:43.935 10:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:44.194 [ 00:12:44.194 { 00:12:44.194 "name": "Nvme0n1", 00:12:44.194 "aliases": [ 00:12:44.194 "4e7c8248-c42b-4fe0-b661-4661637924e2" 00:12:44.194 ], 00:12:44.194 "product_name": "NVMe disk", 00:12:44.194 "block_size": 4096, 00:12:44.194 "num_blocks": 38912, 00:12:44.194 "uuid": "4e7c8248-c42b-4fe0-b661-4661637924e2", 00:12:44.194 "numa_id": -1, 00:12:44.194 "assigned_rate_limits": { 00:12:44.194 "rw_ios_per_sec": 0, 00:12:44.194 "rw_mbytes_per_sec": 0, 00:12:44.194 "r_mbytes_per_sec": 0, 00:12:44.194 "w_mbytes_per_sec": 0 00:12:44.194 }, 00:12:44.194 "claimed": false, 00:12:44.194 "zoned": false, 00:12:44.194 "supported_io_types": { 00:12:44.194 "read": true, 00:12:44.194 "write": true, 00:12:44.194 "unmap": true, 00:12:44.194 "flush": true, 00:12:44.194 "reset": true, 00:12:44.194 "nvme_admin": true, 00:12:44.194 "nvme_io": true, 00:12:44.194 "nvme_io_md": false, 00:12:44.194 "write_zeroes": true, 00:12:44.194 "zcopy": false, 00:12:44.194 "get_zone_info": false, 00:12:44.194 "zone_management": false, 00:12:44.194 "zone_append": false, 00:12:44.194 "compare": true, 00:12:44.194 "compare_and_write": true, 00:12:44.194 "abort": true, 00:12:44.194 "seek_hole": false, 00:12:44.194 "seek_data": false, 00:12:44.194 "copy": true, 00:12:44.194 "nvme_iov_md": false 00:12:44.194 }, 00:12:44.194 "memory_domains": [ 00:12:44.194 { 00:12:44.194 "dma_device_id": "system", 00:12:44.194 "dma_device_type": 1 00:12:44.194 } 00:12:44.194 ], 00:12:44.194 "driver_specific": { 00:12:44.194 "nvme": [ 00:12:44.194 { 00:12:44.194 "trid": { 00:12:44.194 "trtype": "TCP", 00:12:44.194 "adrfam": "IPv4", 00:12:44.194 "traddr": "10.0.0.2", 00:12:44.194 "trsvcid": "4420", 00:12:44.194 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:44.194 }, 00:12:44.194 "ctrlr_data": { 00:12:44.194 "cntlid": 1, 00:12:44.194 "vendor_id": "0x8086", 00:12:44.194 "model_number": "SPDK bdev Controller", 00:12:44.194 "serial_number": "SPDK0", 00:12:44.194 "firmware_revision": "25.01", 00:12:44.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:44.194 "oacs": { 00:12:44.194 "security": 0, 00:12:44.194 "format": 0, 00:12:44.194 "firmware": 0, 
00:12:44.194 "ns_manage": 0 00:12:44.194 }, 00:12:44.194 "multi_ctrlr": true, 00:12:44.194 "ana_reporting": false 00:12:44.194 }, 00:12:44.194 "vs": { 00:12:44.194 "nvme_version": "1.3" 00:12:44.194 }, 00:12:44.194 "ns_data": { 00:12:44.194 "id": 1, 00:12:44.194 "can_share": true 00:12:44.194 } 00:12:44.194 } 00:12:44.194 ], 00:12:44.194 "mp_policy": "active_passive" 00:12:44.194 } 00:12:44.194 } 00:12:44.194 ] 00:12:44.194 10:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63304 00:12:44.194 10:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:44.194 10:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:44.194 Running I/O for 10 seconds... 00:12:45.131 Latency(us) 00:12:45.131 [2024-12-05T10:56:12.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.131 Nvme0n1 : 1.00 9407.00 36.75 0.00 0.00 0.00 0.00 0.00 00:12:45.131 [2024-12-05T10:56:12.290Z] =================================================================================================================== 00:12:45.131 [2024-12-05T10:56:12.290Z] Total : 9407.00 36.75 0.00 0.00 0.00 0.00 0.00 00:12:45.131 00:12:46.067 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:46.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.326 Nvme0n1 : 2.00 9593.00 37.47 0.00 0.00 0.00 0.00 0.00 00:12:46.326 [2024-12-05T10:56:13.485Z] =================================================================================================================== 00:12:46.326 [2024-12-05T10:56:13.485Z] Total : 9593.00 37.47 0.00 0.00 0.00 0.00 0.00 00:12:46.326 00:12:46.326 true 00:12:46.326 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:46.326 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:46.584 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:46.584 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:46.584 10:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63304 00:12:47.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.153 Nvme0n1 : 3.00 9600.67 37.50 0.00 0.00 0.00 0.00 0.00 00:12:47.153 [2024-12-05T10:56:14.312Z] =================================================================================================================== 00:12:47.153 [2024-12-05T10:56:14.312Z] Total : 9600.67 37.50 0.00 0.00 0.00 0.00 0.00 00:12:47.153 00:12:48.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.545 Nvme0n1 : 4.00 9345.75 36.51 0.00 0.00 0.00 0.00 0.00 00:12:48.545 [2024-12-05T10:56:15.704Z] 
=================================================================================================================== 00:12:48.545 [2024-12-05T10:56:15.704Z] Total : 9345.75 36.51 0.00 0.00 0.00 0.00 0.00 00:12:48.545 00:12:49.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.112 Nvme0n1 : 5.00 9305.40 36.35 0.00 0.00 0.00 0.00 0.00 00:12:49.112 [2024-12-05T10:56:16.271Z] =================================================================================================================== 00:12:49.112 [2024-12-05T10:56:16.271Z] Total : 9305.40 36.35 0.00 0.00 0.00 0.00 0.00 00:12:49.112 00:12:50.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.488 Nvme0n1 : 6.00 9272.67 36.22 0.00 0.00 0.00 0.00 0.00 00:12:50.488 [2024-12-05T10:56:17.647Z] =================================================================================================================== 00:12:50.488 [2024-12-05T10:56:17.647Z] Total : 9272.67 36.22 0.00 0.00 0.00 0.00 0.00 00:12:50.488 00:12:51.422 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.422 Nvme0n1 : 7.00 9236.14 36.08 0.00 0.00 0.00 0.00 0.00 00:12:51.422 [2024-12-05T10:56:18.581Z] =================================================================================================================== 00:12:51.422 [2024-12-05T10:56:18.581Z] Total : 9236.14 36.08 0.00 0.00 0.00 0.00 0.00 00:12:51.422 00:12:52.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.442 Nvme0n1 : 8.00 9222.62 36.03 0.00 0.00 0.00 0.00 0.00 00:12:52.442 [2024-12-05T10:56:19.601Z] =================================================================================================================== 00:12:52.442 [2024-12-05T10:56:19.601Z] Total : 9222.62 36.03 0.00 0.00 0.00 0.00 0.00 00:12:52.442 00:12:53.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.377 Nvme0n1 : 9.00 9143.33 35.72 0.00 0.00 0.00 0.00 0.00 00:12:53.377 [2024-12-05T10:56:20.536Z] =================================================================================================================== 00:12:53.377 [2024-12-05T10:56:20.536Z] Total : 9143.33 35.72 0.00 0.00 0.00 0.00 0.00 00:12:53.377 00:12:54.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.346 Nvme0n1 : 10.00 9067.20 35.42 0.00 0.00 0.00 0.00 0.00 00:12:54.346 [2024-12-05T10:56:21.505Z] =================================================================================================================== 00:12:54.346 [2024-12-05T10:56:21.505Z] Total : 9067.20 35.42 0.00 0.00 0.00 0.00 0.00 00:12:54.346 00:12:54.346 00:12:54.346 Latency(us) 00:12:54.346 [2024-12-05T10:56:21.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.346 Nvme0n1 : 10.01 9070.95 35.43 0.00 0.00 14106.99 5290.26 104436.49 00:12:54.346 [2024-12-05T10:56:21.505Z] =================================================================================================================== 00:12:54.346 [2024-12-05T10:56:21.505Z] Total : 9070.95 35.43 0.00 0.00 14106.99 5290.26 104436.49 00:12:54.346 { 00:12:54.346 "results": [ 00:12:54.346 { 00:12:54.346 "job": "Nvme0n1", 00:12:54.346 "core_mask": "0x2", 00:12:54.346 "workload": "randwrite", 00:12:54.346 "status": "finished", 00:12:54.346 "queue_depth": 128, 00:12:54.346 "io_size": 4096, 00:12:54.346 "runtime": 
10.009973, 00:12:54.346 "iops": 9070.953538036516, 00:12:54.346 "mibps": 35.43341225795514, 00:12:54.346 "io_failed": 0, 00:12:54.346 "io_timeout": 0, 00:12:54.346 "avg_latency_us": 14106.988794225359, 00:12:54.346 "min_latency_us": 5290.255421686747, 00:12:54.346 "max_latency_us": 104436.48514056225 00:12:54.346 } 00:12:54.346 ], 00:12:54.346 "core_count": 1 00:12:54.346 } 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63286 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63286 ']' 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63286 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63286 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63286' 00:12:54.346 killing process with pid 63286 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63286 00:12:54.346 Received shutdown signal, test time was about 10.000000 seconds 00:12:54.346 00:12:54.346 Latency(us) 00:12:54.346 [2024-12-05T10:56:21.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.346 [2024-12-05T10:56:21.505Z] =================================================================================================================== 00:12:54.346 [2024-12-05T10:56:21.505Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:54.346 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63286 00:12:54.604 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:54.604 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:54.863 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:54.863 10:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:55.121 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:55.121 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:55.121 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:55.379 [2024-12-05 10:56:22.357421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:55.379 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:55.638 request: 00:12:55.638 { 00:12:55.638 "uuid": "1e425004-8e58-4c94-9545-2c477c0e3b2a", 00:12:55.638 "method": "bdev_lvol_get_lvstores", 00:12:55.638 "req_id": 1 00:12:55.638 } 00:12:55.638 Got JSON-RPC error response 00:12:55.638 response: 00:12:55.638 { 00:12:55.638 "code": -19, 00:12:55.638 "message": "No such device" 00:12:55.638 } 00:12:55.638 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:55.638 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.638 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.638 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.638 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:55.896 aio_bdev 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
4e7c8248-c42b-4fe0-b661-4661637924e2 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4e7c8248-c42b-4fe0-b661-4661637924e2 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.896 10:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:55.896 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e7c8248-c42b-4fe0-b661-4661637924e2 -t 2000 00:12:56.155 [ 00:12:56.155 { 00:12:56.155 "name": "4e7c8248-c42b-4fe0-b661-4661637924e2", 00:12:56.155 "aliases": [ 00:12:56.155 "lvs/lvol" 00:12:56.155 ], 00:12:56.155 "product_name": "Logical Volume", 00:12:56.155 "block_size": 4096, 00:12:56.155 "num_blocks": 38912, 00:12:56.155 "uuid": "4e7c8248-c42b-4fe0-b661-4661637924e2", 00:12:56.155 "assigned_rate_limits": { 00:12:56.155 "rw_ios_per_sec": 0, 00:12:56.155 "rw_mbytes_per_sec": 0, 00:12:56.155 "r_mbytes_per_sec": 0, 00:12:56.155 "w_mbytes_per_sec": 0 00:12:56.155 }, 00:12:56.155 "claimed": false, 00:12:56.155 "zoned": false, 00:12:56.155 "supported_io_types": { 00:12:56.155 "read": true, 00:12:56.155 "write": true, 00:12:56.155 "unmap": true, 00:12:56.155 "flush": false, 00:12:56.155 "reset": true, 00:12:56.155 "nvme_admin": false, 00:12:56.155 "nvme_io": false, 00:12:56.155 "nvme_io_md": false, 00:12:56.155 "write_zeroes": true, 00:12:56.155 "zcopy": false, 00:12:56.155 "get_zone_info": false, 00:12:56.155 "zone_management": false, 00:12:56.155 "zone_append": false, 00:12:56.155 "compare": false, 00:12:56.155 "compare_and_write": false, 00:12:56.155 "abort": false, 00:12:56.155 "seek_hole": true, 00:12:56.155 "seek_data": true, 00:12:56.155 "copy": false, 00:12:56.155 "nvme_iov_md": false 00:12:56.155 }, 00:12:56.155 "driver_specific": { 00:12:56.155 "lvol": { 00:12:56.156 "lvol_store_uuid": "1e425004-8e58-4c94-9545-2c477c0e3b2a", 00:12:56.156 "base_bdev": "aio_bdev", 00:12:56.156 "thin_provision": false, 00:12:56.156 "num_allocated_clusters": 38, 00:12:56.156 "snapshot": false, 00:12:56.156 "clone": false, 00:12:56.156 "esnap_clone": false 00:12:56.156 } 00:12:56.156 } 00:12:56.156 } 00:12:56.156 ] 00:12:56.156 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:56.156 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:56.156 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:56.415 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:56.415 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:56.415 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:56.675 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:56.675 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e7c8248-c42b-4fe0-b661-4661637924e2 00:12:56.934 10:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e425004-8e58-4c94-9545-2c477c0e3b2a 00:12:57.193 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:57.193 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:57.760 ************************************ 00:12:57.760 END TEST lvs_grow_clean 00:12:57.760 ************************************ 00:12:57.760 00:12:57.760 real 0m17.465s 00:12:57.760 user 0m15.429s 00:12:57.760 sys 0m3.223s 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:57.760 ************************************ 00:12:57.760 START TEST lvs_grow_dirty 00:12:57.760 ************************************ 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:57.760 10:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:58.019 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:58.019 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:58.276 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=80338e96-de14-48c7-ba3e-1c3ab873de20 00:12:58.276 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:12:58.276 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 80338e96-de14-48c7-ba3e-1c3ab873de20 lvol 150 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=de04da46-053b-40de-8221-932ce0db61f5 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:58.534 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:58.793 [2024-12-05 10:56:25.862880] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:58.793 [2024-12-05 10:56:25.862949] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:58.793 true 00:12:58.793 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:12:58.793 10:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:59.052 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:59.052 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:59.312 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 de04da46-053b-40de-8221-932ce0db61f5 00:12:59.571 10:56:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:59.830 [2024-12-05 10:56:26.746577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63545 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63545 /var/tmp/bdevperf.sock 00:12:59.830 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63545 ']' 00:12:59.831 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.831 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.831 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.831 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.831 10:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:00.090 [2024-12-05 10:56:27.008083] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:13:00.090 [2024-12-05 10:56:27.008323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:13:00.090 [2024-12-05 10:56:27.160668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.090 [2024-12-05 10:56:27.206481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.090 [2024-12-05 10:56:27.249761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.027 10:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.027 10:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:01.027 10:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:01.027 Nvme0n1 00:13:01.027 10:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:01.286 [ 00:13:01.286 { 00:13:01.286 "name": "Nvme0n1", 00:13:01.286 "aliases": [ 00:13:01.286 "de04da46-053b-40de-8221-932ce0db61f5" 00:13:01.286 ], 00:13:01.286 "product_name": "NVMe disk", 00:13:01.286 "block_size": 4096, 00:13:01.286 "num_blocks": 38912, 00:13:01.286 "uuid": "de04da46-053b-40de-8221-932ce0db61f5", 00:13:01.286 "numa_id": -1, 00:13:01.286 "assigned_rate_limits": { 00:13:01.286 "rw_ios_per_sec": 0, 00:13:01.286 "rw_mbytes_per_sec": 0, 00:13:01.286 "r_mbytes_per_sec": 0, 00:13:01.286 "w_mbytes_per_sec": 0 00:13:01.286 }, 00:13:01.286 "claimed": false, 00:13:01.286 "zoned": false, 00:13:01.286 "supported_io_types": { 00:13:01.286 "read": true, 00:13:01.286 "write": true, 00:13:01.286 "unmap": true, 00:13:01.286 "flush": true, 00:13:01.286 "reset": true, 00:13:01.286 "nvme_admin": true, 00:13:01.286 "nvme_io": true, 00:13:01.286 "nvme_io_md": false, 00:13:01.286 "write_zeroes": true, 00:13:01.286 "zcopy": false, 00:13:01.286 "get_zone_info": false, 00:13:01.286 "zone_management": false, 00:13:01.286 "zone_append": false, 00:13:01.286 "compare": true, 00:13:01.286 "compare_and_write": true, 00:13:01.286 "abort": true, 00:13:01.286 "seek_hole": false, 00:13:01.286 "seek_data": false, 00:13:01.286 "copy": true, 00:13:01.286 "nvme_iov_md": false 00:13:01.287 }, 00:13:01.287 "memory_domains": [ 00:13:01.287 { 00:13:01.287 "dma_device_id": "system", 00:13:01.287 "dma_device_type": 1 00:13:01.287 } 00:13:01.287 ], 00:13:01.287 "driver_specific": { 00:13:01.287 "nvme": [ 00:13:01.287 { 00:13:01.287 "trid": { 00:13:01.287 "trtype": "TCP", 00:13:01.287 "adrfam": "IPv4", 00:13:01.287 "traddr": "10.0.0.2", 00:13:01.287 "trsvcid": "4420", 00:13:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:01.287 }, 00:13:01.287 "ctrlr_data": { 00:13:01.287 "cntlid": 1, 00:13:01.287 "vendor_id": "0x8086", 00:13:01.287 "model_number": "SPDK bdev Controller", 00:13:01.287 "serial_number": "SPDK0", 00:13:01.287 "firmware_revision": "25.01", 00:13:01.287 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:01.287 "oacs": { 00:13:01.287 "security": 0, 00:13:01.287 "format": 0, 00:13:01.287 "firmware": 0, 
00:13:01.287 "ns_manage": 0 00:13:01.287 }, 00:13:01.287 "multi_ctrlr": true, 00:13:01.287 "ana_reporting": false 00:13:01.287 }, 00:13:01.287 "vs": { 00:13:01.287 "nvme_version": "1.3" 00:13:01.287 }, 00:13:01.287 "ns_data": { 00:13:01.287 "id": 1, 00:13:01.287 "can_share": true 00:13:01.287 } 00:13:01.287 } 00:13:01.287 ], 00:13:01.287 "mp_policy": "active_passive" 00:13:01.287 } 00:13:01.287 } 00:13:01.287 ] 00:13:01.287 10:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63569 00:13:01.287 10:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:01.287 10:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:01.547 Running I/O for 10 seconds... 00:13:02.485 Latency(us) 00:13:02.485 [2024-12-05T10:56:29.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.485 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:13:02.485 [2024-12-05T10:56:29.644Z] =================================================================================================================== 00:13:02.485 [2024-12-05T10:56:29.644Z] Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:13:02.485 00:13:03.428 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:03.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.428 Nvme0n1 : 2.00 10126.50 39.56 0.00 0.00 0.00 0.00 0.00 00:13:03.428 [2024-12-05T10:56:30.587Z] =================================================================================================================== 00:13:03.428 [2024-12-05T10:56:30.587Z] Total : 10126.50 39.56 0.00 0.00 0.00 0.00 0.00 00:13:03.428 00:13:03.686 true 00:13:03.686 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:03.686 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:03.944 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:03.944 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:03.944 10:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63569 00:13:04.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.509 Nvme0n1 : 3.00 9883.67 38.61 0.00 0.00 0.00 0.00 0.00 00:13:04.509 [2024-12-05T10:56:31.668Z] =================================================================================================================== 00:13:04.509 [2024-12-05T10:56:31.668Z] Total : 9883.67 38.61 0.00 0.00 0.00 0.00 0.00 00:13:04.509 00:13:05.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.442 Nvme0n1 : 4.00 9821.50 38.37 0.00 0.00 0.00 0.00 0.00 00:13:05.442 [2024-12-05T10:56:32.601Z] 
=================================================================================================================== 00:13:05.442 [2024-12-05T10:56:32.601Z] Total : 9821.50 38.37 0.00 0.00 0.00 0.00 0.00 00:13:05.442 00:13:06.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.376 Nvme0n1 : 5.00 9755.20 38.11 0.00 0.00 0.00 0.00 0.00 00:13:06.376 [2024-12-05T10:56:33.535Z] =================================================================================================================== 00:13:06.376 [2024-12-05T10:56:33.535Z] Total : 9755.20 38.11 0.00 0.00 0.00 0.00 0.00 00:13:06.376 00:13:07.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.751 Nvme0n1 : 6.00 9642.83 37.67 0.00 0.00 0.00 0.00 0.00 00:13:07.751 [2024-12-05T10:56:34.910Z] =================================================================================================================== 00:13:07.751 [2024-12-05T10:56:34.910Z] Total : 9642.83 37.67 0.00 0.00 0.00 0.00 0.00 00:13:07.751 00:13:08.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.683 Nvme0n1 : 7.00 9542.29 37.27 0.00 0.00 0.00 0.00 0.00 00:13:08.683 [2024-12-05T10:56:35.842Z] =================================================================================================================== 00:13:08.683 [2024-12-05T10:56:35.842Z] Total : 9542.29 37.27 0.00 0.00 0.00 0.00 0.00 00:13:08.683 00:13:09.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.620 Nvme0n1 : 8.00 9272.00 36.22 0.00 0.00 0.00 0.00 0.00 00:13:09.620 [2024-12-05T10:56:36.779Z] =================================================================================================================== 00:13:09.620 [2024-12-05T10:56:36.779Z] Total : 9272.00 36.22 0.00 0.00 0.00 0.00 0.00 00:13:09.620 00:13:10.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.556 Nvme0n1 : 9.00 9215.44 36.00 0.00 0.00 0.00 0.00 0.00 00:13:10.556 [2024-12-05T10:56:37.715Z] =================================================================================================================== 00:13:10.556 [2024-12-05T10:56:37.715Z] Total : 9215.44 36.00 0.00 0.00 0.00 0.00 0.00 00:13:10.556 00:13:11.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.493 Nvme0n1 : 10.00 9220.90 36.02 0.00 0.00 0.00 0.00 0.00 00:13:11.493 [2024-12-05T10:56:38.652Z] =================================================================================================================== 00:13:11.493 [2024-12-05T10:56:38.652Z] Total : 9220.90 36.02 0.00 0.00 0.00 0.00 0.00 00:13:11.493 00:13:11.493 00:13:11.493 Latency(us) 00:13:11.493 [2024-12-05T10:56:38.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.493 Nvme0n1 : 10.01 9216.25 36.00 0.00 0.00 13880.24 6711.52 261091.21 00:13:11.493 [2024-12-05T10:56:38.652Z] =================================================================================================================== 00:13:11.493 [2024-12-05T10:56:38.652Z] Total : 9216.25 36.00 0.00 0.00 13880.24 6711.52 261091.21 00:13:11.493 { 00:13:11.493 "results": [ 00:13:11.493 { 00:13:11.493 "job": "Nvme0n1", 00:13:11.493 "core_mask": "0x2", 00:13:11.493 "workload": "randwrite", 00:13:11.493 "status": "finished", 00:13:11.493 "queue_depth": 128, 00:13:11.493 "io_size": 4096, 00:13:11.493 "runtime": 
10.005151, 00:13:11.493 "iops": 9216.252708229991, 00:13:11.493 "mibps": 36.0009871415234, 00:13:11.493 "io_failed": 0, 00:13:11.493 "io_timeout": 0, 00:13:11.493 "avg_latency_us": 13880.237307246556, 00:13:11.493 "min_latency_us": 6711.518072289156, 00:13:11.493 "max_latency_us": 261091.2128514056 00:13:11.493 } 00:13:11.493 ], 00:13:11.493 "core_count": 1 00:13:11.493 } 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63545 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63545 ']' 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63545 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63545 00:13:11.493 killing process with pid 63545 00:13:11.493 Received shutdown signal, test time was about 10.000000 seconds 00:13:11.493 00:13:11.493 Latency(us) 00:13:11.493 [2024-12-05T10:56:38.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.493 [2024-12-05T10:56:38.652Z] =================================================================================================================== 00:13:11.493 [2024-12-05T10:56:38.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63545' 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63545 00:13:11.493 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63545 00:13:11.752 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:12.010 10:56:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:12.010 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:12.010 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63198 00:13:12.269 
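The block above closes the dirty run's I/O phase: bdevperf reports ~9216 IOPS over ~10 s, the bdevperf app is torn down, free_clusters is confirmed at 61, and because this is the dirty variant the nvmf target itself is SIGKILLed (the kill -9 63198 at nvmf_lvs_grow.sh@74) so the grown lvstore is left unclean on disk for the recovery path below. Stripped of the harness plumbing, the grow flow both variants exercise reduces to roughly the following sketch, mirroring the commands and sizes in the log (assuming a running target on the default /var/tmp/spdk.sock; /path/to/aio_file is a placeholder):

  truncate -s 200M /path/to/aio_file
  rpc.py bdev_aio_create /path/to/aio_file aio_bdev 4096    # file-backed base bdev
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  # the call above prints the new lvstore UUID; reuse it below
  lvs=80338e96-de14-48c7-ba3e-1c3ab873de20                  # placeholder: substitute the printed UUID
  rpc.py bdev_lvol_create -u "$lvs" lvol 150                # 150 MiB lvol, as in the log
  truncate -s 400M /path/to/aio_file                        # grow the backing file...
  rpc.py bdev_aio_rescan aio_bdev                           # ...and let the aio bdev pick it up
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                   # claim the new space
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after with these sizes

A SIGKILL after the grow leaves the lvstore dirty; on the next bdev_aio_create of the same file the blobstore replays its metadata automatically, which is the "Performing recovery on blobstore" notice visible further down.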
10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63198 00:13:12.269 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63198 Killed "${NVMF_APP[@]}" "$@" 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=63696 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 63696 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63696 ']' 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.269 10:56:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:12.539 [2024-12-05 10:56:39.474363] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:12.539 [2024-12-05 10:56:39.474600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.539 [2024-12-05 10:56:39.628608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.539 [2024-12-05 10:56:39.680622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.539 [2024-12-05 10:56:39.680795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.539 [2024-12-05 10:56:39.680894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.539 [2024-12-05 10:56:39.680941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.539 [2024-12-05 10:56:39.680968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
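The restarted target runs with every tracepoint group enabled (-e 0xFFFF), and the app_setup_trace notices above give the capture recipe verbatim. A minimal example, assuming spdk_trace from this repo's build/bin is on PATH:

  spdk_trace -s nvmf -i 0        # live snapshot of shm instance 0 of app 'nvmf'
  cp /dev/shm/nvmf_trace.0 .     # or keep the raw shm file for offline analysis

The harness takes the second route at teardown, where nvmf_trace.0 is tarred into the output directory.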
00:13:12.539 [2024-12-05 10:56:39.681262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.809 [2024-12-05 10:56:39.723857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.376 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:13.635 [2024-12-05 10:56:40.611211] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:13.635 [2024-12-05 10:56:40.611635] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:13.635 [2024-12-05 10:56:40.611945] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev de04da46-053b-40de-8221-932ce0db61f5 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=de04da46-053b-40de-8221-932ce0db61f5 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:13.635 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:13.893 10:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de04da46-053b-40de-8221-932ce0db61f5 -t 2000 00:13:14.151 [ 00:13:14.151 { 00:13:14.151 "name": "de04da46-053b-40de-8221-932ce0db61f5", 00:13:14.151 "aliases": [ 00:13:14.151 "lvs/lvol" 00:13:14.151 ], 00:13:14.151 "product_name": "Logical Volume", 00:13:14.151 "block_size": 4096, 00:13:14.151 "num_blocks": 38912, 00:13:14.151 "uuid": "de04da46-053b-40de-8221-932ce0db61f5", 00:13:14.151 "assigned_rate_limits": { 00:13:14.151 "rw_ios_per_sec": 0, 00:13:14.151 "rw_mbytes_per_sec": 0, 00:13:14.151 "r_mbytes_per_sec": 0, 00:13:14.151 "w_mbytes_per_sec": 0 00:13:14.151 }, 00:13:14.151 
"claimed": false, 00:13:14.151 "zoned": false, 00:13:14.151 "supported_io_types": { 00:13:14.151 "read": true, 00:13:14.151 "write": true, 00:13:14.151 "unmap": true, 00:13:14.151 "flush": false, 00:13:14.151 "reset": true, 00:13:14.151 "nvme_admin": false, 00:13:14.151 "nvme_io": false, 00:13:14.151 "nvme_io_md": false, 00:13:14.151 "write_zeroes": true, 00:13:14.151 "zcopy": false, 00:13:14.151 "get_zone_info": false, 00:13:14.151 "zone_management": false, 00:13:14.151 "zone_append": false, 00:13:14.151 "compare": false, 00:13:14.151 "compare_and_write": false, 00:13:14.151 "abort": false, 00:13:14.151 "seek_hole": true, 00:13:14.151 "seek_data": true, 00:13:14.151 "copy": false, 00:13:14.151 "nvme_iov_md": false 00:13:14.151 }, 00:13:14.151 "driver_specific": { 00:13:14.151 "lvol": { 00:13:14.151 "lvol_store_uuid": "80338e96-de14-48c7-ba3e-1c3ab873de20", 00:13:14.151 "base_bdev": "aio_bdev", 00:13:14.151 "thin_provision": false, 00:13:14.151 "num_allocated_clusters": 38, 00:13:14.151 "snapshot": false, 00:13:14.151 "clone": false, 00:13:14.151 "esnap_clone": false 00:13:14.151 } 00:13:14.151 } 00:13:14.151 } 00:13:14.151 ] 00:13:14.151 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:14.151 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:14.151 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:14.151 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:14.410 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:14.410 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:14.410 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:14.410 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:14.669 [2024-12-05 10:56:41.750902] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.669 10:56:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:14.669 10:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:14.929 request: 00:13:14.929 { 00:13:14.929 "uuid": "80338e96-de14-48c7-ba3e-1c3ab873de20", 00:13:14.929 "method": "bdev_lvol_get_lvstores", 00:13:14.929 "req_id": 1 00:13:14.929 } 00:13:14.929 Got JSON-RPC error response 00:13:14.929 response: 00:13:14.929 { 00:13:14.929 "code": -19, 00:13:14.929 "message": "No such device" 00:13:14.929 } 00:13:14.929 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:13:14.929 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.929 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.929 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.929 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:15.188 aio_bdev 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev de04da46-053b-40de-8221-932ce0db61f5 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=de04da46-053b-40de-8221-932ce0db61f5 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.188 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:15.447 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de04da46-053b-40de-8221-932ce0db61f5 -t 2000 00:13:15.706 [ 00:13:15.706 { 
00:13:15.706 "name": "de04da46-053b-40de-8221-932ce0db61f5", 00:13:15.706 "aliases": [ 00:13:15.706 "lvs/lvol" 00:13:15.706 ], 00:13:15.706 "product_name": "Logical Volume", 00:13:15.706 "block_size": 4096, 00:13:15.706 "num_blocks": 38912, 00:13:15.706 "uuid": "de04da46-053b-40de-8221-932ce0db61f5", 00:13:15.706 "assigned_rate_limits": { 00:13:15.706 "rw_ios_per_sec": 0, 00:13:15.706 "rw_mbytes_per_sec": 0, 00:13:15.706 "r_mbytes_per_sec": 0, 00:13:15.706 "w_mbytes_per_sec": 0 00:13:15.706 }, 00:13:15.706 "claimed": false, 00:13:15.706 "zoned": false, 00:13:15.706 "supported_io_types": { 00:13:15.706 "read": true, 00:13:15.706 "write": true, 00:13:15.706 "unmap": true, 00:13:15.706 "flush": false, 00:13:15.706 "reset": true, 00:13:15.706 "nvme_admin": false, 00:13:15.706 "nvme_io": false, 00:13:15.706 "nvme_io_md": false, 00:13:15.706 "write_zeroes": true, 00:13:15.706 "zcopy": false, 00:13:15.706 "get_zone_info": false, 00:13:15.706 "zone_management": false, 00:13:15.706 "zone_append": false, 00:13:15.706 "compare": false, 00:13:15.706 "compare_and_write": false, 00:13:15.706 "abort": false, 00:13:15.706 "seek_hole": true, 00:13:15.706 "seek_data": true, 00:13:15.706 "copy": false, 00:13:15.706 "nvme_iov_md": false 00:13:15.706 }, 00:13:15.706 "driver_specific": { 00:13:15.706 "lvol": { 00:13:15.706 "lvol_store_uuid": "80338e96-de14-48c7-ba3e-1c3ab873de20", 00:13:15.706 "base_bdev": "aio_bdev", 00:13:15.706 "thin_provision": false, 00:13:15.706 "num_allocated_clusters": 38, 00:13:15.706 "snapshot": false, 00:13:15.706 "clone": false, 00:13:15.706 "esnap_clone": false 00:13:15.706 } 00:13:15.706 } 00:13:15.706 } 00:13:15.706 ] 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:15.706 10:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:15.966 10:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:15.966 10:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete de04da46-053b-40de-8221-932ce0db61f5 00:13:16.225 10:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80338e96-de14-48c7-ba3e-1c3ab873de20 00:13:16.484 10:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:16.743 10:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:13:17.309 00:13:17.309 real 0m19.389s 00:13:17.309 user 0m38.847s 00:13:17.309 sys 0m8.037s 00:13:17.309 ************************************ 00:13:17.309 END TEST lvs_grow_dirty 00:13:17.309 ************************************ 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:17.309 nvmf_trace.0 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:17.309 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:17.567 rmmod nvme_tcp 00:13:17.567 rmmod nvme_fabrics 00:13:17.567 rmmod nvme_keyring 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 63696 ']' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 63696 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63696 ']' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63696 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:13:17.567 10:56:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63696 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.567 killing process with pid 63696 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63696' 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63696 00:13:17.567 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63696 00:13:17.826 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:17.826 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:13:17.826 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:13:17.827 10:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:13:18.086 00:13:18.086 real 0m40.147s 00:13:18.086 user 1m0.581s 00:13:18.086 sys 0m12.557s 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:18.086 ************************************ 00:13:18.086 END TEST nvmf_lvs_grow 00:13:18.086 ************************************ 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:18.086 ************************************ 00:13:18.086 START TEST nvmf_bdev_io_wait 00:13:18.086 ************************************ 00:13:18.086 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:18.086 * Looking for test storage... 
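
Note on the iptables cleanup traced above: the harness never flushes chains wholesale. Every rule it installs through its ipts helper carries an "-m comment --comment 'SPDK_NVMF:...'" tag, so the iptr step can dump the full ruleset, filter out only the tagged lines, and restore the remainder untouched. A condensed sketch of the idiom as it appears in this trace (run as root; interface and port taken from the log):

  # install a rule tagged so teardown can find it again
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

  # teardown: re-apply everything except the SPDK-tagged rules
  iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rather than flushing means teardown cannot disturb firewall rules owned by the host or by other jobs sharing the machine.
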
00:13:18.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.356 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.357 --rc genhtml_branch_coverage=1 00:13:18.357 --rc genhtml_function_coverage=1 00:13:18.357 --rc genhtml_legend=1 00:13:18.357 --rc geninfo_all_blocks=1 00:13:18.357 --rc geninfo_unexecuted_blocks=1 00:13:18.357 00:13:18.357 ' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.357 --rc genhtml_branch_coverage=1 00:13:18.357 --rc genhtml_function_coverage=1 00:13:18.357 --rc genhtml_legend=1 00:13:18.357 --rc geninfo_all_blocks=1 00:13:18.357 --rc geninfo_unexecuted_blocks=1 00:13:18.357 00:13:18.357 ' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.357 --rc genhtml_branch_coverage=1 00:13:18.357 --rc genhtml_function_coverage=1 00:13:18.357 --rc genhtml_legend=1 00:13:18.357 --rc geninfo_all_blocks=1 00:13:18.357 --rc geninfo_unexecuted_blocks=1 00:13:18.357 00:13:18.357 ' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.357 --rc genhtml_branch_coverage=1 00:13:18.357 --rc genhtml_function_coverage=1 00:13:18.357 --rc genhtml_legend=1 00:13:18.357 --rc geninfo_all_blocks=1 00:13:18.357 --rc geninfo_unexecuted_blocks=1 00:13:18.357 00:13:18.357 ' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:18.357 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:18.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@223 -- # create_target_ns 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:18.358 10:56:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:18.358 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:13:18.359 10:56:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:13:18.359 10.0.0.1 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:18.359 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:18.618 10.0.0.2 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@66 -- # set_up initiator0 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.618 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 
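
The setup traced above builds one initiator/target veth pair: both *_br peers are enslaved to the nvmf_br bridge, the target end is moved into the nvmf_ns_spdk namespace, and each address is written both to the interface and to its ifalias file (which get_ip_address reads back later). Condensed from the trace into a minimal sketch for pair 0 (run as root; this flattens the setup.sh helpers and omits their error handling):

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up

  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk              # target side lives in the namespace

  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias

  ip link set initiator0 up
  ip netns exec nvmf_ns_spdk ip link set target0 up

  ip link set initiator0_br master nvmf_br            # bridge the root-namespace peers
  ip link set target0_br master nvmf_br
  ip link set initiator0_br up
  ip link set target0_br up

The ping_ips pass later in this trace then verifies the path in both directions: from inside the namespace to the initiator address, and from the root namespace to the target address.
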
00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:18.619 10:56:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:13:18.619 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:18.620 10.0.0.3 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@200 -- # echo 10.0.0.3 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:13:18.620 10.0.0.4 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:13:18.620 10:56:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:13:18.620 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:18.881 
10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:18.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:13:18.881 00:13:18.881 --- 10.0.0.1 ping statistics --- 00:13:18.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.881 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:18.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:18.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:13:18.881 00:13:18.881 --- 10.0.0.2 ping statistics --- 00:13:18.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.881 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:13:18.881 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:18.882 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:18.882 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:13:18.882 00:13:18.882 --- 10.0.0.3 ping statistics --- 00:13:18.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.882 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:13:18.882 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:18.882 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.119 ms 00:13:18.882 00:13:18.882 --- 10.0.0.4 ping statistics --- 00:13:18.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.882 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:18.882 10:56:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:18.882 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:18.883 10:56:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.883 10:56:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:18.883 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
nvmfpid=64062 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 64062 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64062 ']' 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.142 10:56:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:19.142 [2024-12-05 10:56:46.122577] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:19.142 [2024-12-05 10:56:46.122639] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.142 [2024-12-05 10:56:46.279588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.401 [2024-12-05 10:56:46.328101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.401 [2024-12-05 10:56:46.328149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.401 [2024-12-05 10:56:46.328158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.401 [2024-12-05 10:56:46.328167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.401 [2024-12-05 10:56:46.328173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
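
Stepping back from the target startup banner above: the earlier part of this trace resolves every test address by reading it back from the interface alias (cat /sys/class/net/<dev>/ifalias) and then confirms reachability with one-packet pings, issued from inside the nvmf_ns_spdk namespace for the initiator addresses (10.0.0.1, 10.0.0.3) and from the host side for the target addresses (10.0.0.2, 10.0.0.4). Below is a minimal sketch of that pattern, assuming the same device and namespace names; it is a simplified reading of the get_ip_address/ping_ip helpers seen in the trace, not the verbatim nvmf/setup.sh source.

    # Simplified sketch of the address helpers traced above.
    get_ip_address() {
        local dev=$1 netns=$2
        if [[ -n $netns ]]; then
            # Target-side devices live inside the nvmf_ns_spdk namespace.
            ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }

    ping_ip() {
        local ip=$1 netns=$2
        if [[ -n $netns ]]; then
            ip netns exec "$netns" ping -c 1 "$ip"
        else
            ping -c 1 "$ip"
        fi
    }

    # The four reachability checks seen in the log, in order:
    ping_ip "$(get_ip_address initiator0)" nvmf_ns_spdk   # 10.0.0.1: target netns -> initiator
    ping_ip "$(get_ip_address target0 nvmf_ns_spdk)"      # 10.0.0.2: host -> target
    ping_ip "$(get_ip_address initiator1)" nvmf_ns_spdk   # 10.0.0.3
    ping_ip "$(get_ip_address target1 nvmf_ns_spdk)"      # 10.0.0.4

Stashing the IP in ifalias rather than a shell variable lets any helper, in or out of the namespace, recover it from sysfs alone, which is why the trace repeatedly cats the alias instead of passing addresses around.
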
00:13:19.401 [2024-12-05 10:56:46.329049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.401 [2024-12-05 10:56:46.329097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.401 [2024-12-05 10:56:46.329144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.401 [2024-12-05 10:56:46.329147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.970 [2024-12-05 10:56:47.114595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.970 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:19.970 [2024-12-05 10:56:47.125739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.231 Malloc0 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.231 [2024-12-05 10:56:47.188753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64097 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64100 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:20.231 { 00:13:20.231 "params": { 00:13:20.231 "name": "Nvme$subsystem", 00:13:20.231 "trtype": "$TEST_TRANSPORT", 00:13:20.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.231 "adrfam": "ipv4", 00:13:20.231 "trsvcid": "$NVMF_PORT", 00:13:20.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.231 "hdgst": ${hdgst:-false}, 00:13:20.231 "ddgst": ${ddgst:-false} 00:13:20.231 }, 00:13:20.231 "method": "bdev_nvme_attach_controller" 00:13:20.231 } 00:13:20.231 EOF 00:13:20.231 )") 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64102 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 
-- # cat 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:20.231 { 00:13:20.231 "params": { 00:13:20.231 "name": "Nvme$subsystem", 00:13:20.231 "trtype": "$TEST_TRANSPORT", 00:13:20.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.231 "adrfam": "ipv4", 00:13:20.231 "trsvcid": "$NVMF_PORT", 00:13:20.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.231 "hdgst": ${hdgst:-false}, 00:13:20.231 "ddgst": ${ddgst:-false} 00:13:20.231 }, 00:13:20.231 "method": "bdev_nvme_attach_controller" 00:13:20.231 } 00:13:20.231 EOF 00:13:20.231 )") 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:20.231 { 00:13:20.231 "params": { 00:13:20.231 "name": "Nvme$subsystem", 00:13:20.231 "trtype": "$TEST_TRANSPORT", 00:13:20.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.231 "adrfam": "ipv4", 00:13:20.231 "trsvcid": "$NVMF_PORT", 00:13:20.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.231 "hdgst": ${hdgst:-false}, 00:13:20.231 "ddgst": ${ddgst:-false} 00:13:20.231 }, 00:13:20.231 "method": "bdev_nvme_attach_controller" 00:13:20.231 } 00:13:20.231 EOF 00:13:20.231 )") 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64107 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@396 -- # jq . 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:20.231 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:20.231 { 00:13:20.232 "params": { 00:13:20.232 "name": "Nvme$subsystem", 00:13:20.232 "trtype": "$TEST_TRANSPORT", 00:13:20.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:20.232 "adrfam": "ipv4", 00:13:20.232 "trsvcid": "$NVMF_PORT", 00:13:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:20.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:20.232 "hdgst": ${hdgst:-false}, 00:13:20.232 "ddgst": ${ddgst:-false} 00:13:20.232 }, 00:13:20.232 "method": "bdev_nvme_attach_controller" 00:13:20.232 } 00:13:20.232 EOF 00:13:20.232 )") 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:20.232 "params": { 00:13:20.232 "name": "Nvme1", 00:13:20.232 "trtype": "tcp", 00:13:20.232 "traddr": "10.0.0.2", 00:13:20.232 "adrfam": "ipv4", 00:13:20.232 "trsvcid": "4420", 00:13:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.232 "hdgst": false, 00:13:20.232 "ddgst": false 00:13:20.232 }, 00:13:20.232 "method": "bdev_nvme_attach_controller" 00:13:20.232 }' 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
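
The config+=( ... heredoc ... ), jq ., IFS=, and printf entries above are gen_nvmf_target_json assembling the bdev_nvme_attach_controller parameters for Nvme1. A hedged sketch of just the fragment visible in the trace follows; the real helper in nvmf/common.sh presumably wraps this into a complete bdevperf subsystem config, which the log never shows fully expanded. Each bdevperf consumes the result as --json <(gen_nvmf_target_json), which is why the command lines above read --json /dev/fd/63.

    # Sketch only: emit the attach-controller parameters visible in the trace.
    gen_nvmf_target_json() {
        # jq . normalizes and validates the JSON, as in the traced helper.
        jq . <<'JSON'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
    }
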
00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:20.232 "params": { 00:13:20.232 "name": "Nvme1", 00:13:20.232 "trtype": "tcp", 00:13:20.232 "traddr": "10.0.0.2", 00:13:20.232 "adrfam": "ipv4", 00:13:20.232 "trsvcid": "4420", 00:13:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.232 "hdgst": false, 00:13:20.232 "ddgst": false 00:13:20.232 }, 00:13:20.232 "method": "bdev_nvme_attach_controller" 00:13:20.232 }' 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:20.232 "params": { 00:13:20.232 "name": "Nvme1", 00:13:20.232 "trtype": "tcp", 00:13:20.232 "traddr": "10.0.0.2", 00:13:20.232 "adrfam": "ipv4", 00:13:20.232 "trsvcid": "4420", 00:13:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.232 "hdgst": false, 00:13:20.232 "ddgst": false 00:13:20.232 }, 00:13:20.232 "method": "bdev_nvme_attach_controller" 00:13:20.232 }' 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:20.232 "params": { 00:13:20.232 "name": "Nvme1", 00:13:20.232 "trtype": "tcp", 00:13:20.232 "traddr": "10.0.0.2", 00:13:20.232 "adrfam": "ipv4", 00:13:20.232 "trsvcid": "4420", 00:13:20.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:20.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:20.232 "hdgst": false, 00:13:20.232 "ddgst": false 00:13:20.232 }, 00:13:20.232 "method": "bdev_nvme_attach_controller" 00:13:20.232 }' 00:13:20.232 [2024-12-05 10:56:47.246830] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:20.232 [2024-12-05 10:56:47.246900] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:20.232 [2024-12-05 10:56:47.271216] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:20.232 [2024-12-05 10:56:47.271339] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:20.232 [2024-12-05 10:56:47.273000] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:20.232 [2024-12-05 10:56:47.273062] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:20.232 [2024-12-05 10:56:47.275144] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:13:20.232 [2024-12-05 10:56:47.275330] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:20.232 10:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64097 00:13:20.490 [2024-12-05 10:56:47.459991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.490 [2024-12-05 10:56:47.504553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:20.490 [2024-12-05 10:56:47.516375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.490 [2024-12-05 10:56:47.561688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.490 [2024-12-05 10:56:47.606002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:20.490 [2024-12-05 10:56:47.618016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.490 [2024-12-05 10:56:47.623766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.749 [2024-12-05 10:56:47.669538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:20.749 Running I/O for 1 seconds... 00:13:20.749 [2024-12-05 10:56:47.681805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.749 [2024-12-05 10:56:47.699299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.749 Running I/O for 1 seconds... 00:13:20.749 [2024-12-05 10:56:47.763094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:20.749 [2024-12-05 10:56:47.775152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:20.749 Running I/O for 1 seconds... 00:13:21.006 Running I/O for 1 seconds... 
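
At this point all four bdevperf jobs are running in parallel, one per core (masks 0x10, 0x20, 0x40, 0x80), each driving a different workload (write, read, flush, unmap) against the same Nvme1 controller at queue depth 128 with 4096-byte I/O for one second. The sketch below is a hedged reduction of that launch-and-reap pattern; the script itself stores WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on each one individually, as the wait 64097/64100/64102/64107 entries show, and gen_nvmf_target_json is the config helper sketched earlier.

    # Hedged reduction of the four bdevperf launches traced above.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    declare -A masks=([write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80)
    i=1
    pids=()
    for w in write read flush unmap; do
        # -i gives each instance its own shared-memory id so they can coexist.
        "$bdevperf" -m "${masks[$w]}" -i "$((i++))" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$w" -t 1 -s 256 &
        pids+=("$!")
    done
    wait "${pids[@]}"

Because the jobs finish independently, their per-workload result tables below interleave with the wait calls rather than appearing in launch order.
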
00:13:21.572 8215.00 IOPS, 32.09 MiB/s
00:13:21.572 Latency(us)
00:13:21.572 [2024-12-05T10:56:48.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.572 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:13:21.572 Nvme1n1 : 1.02 8231.10 32.15 0.00 0.00 15437.63 7369.51 30530.83
00:13:21.572 [2024-12-05T10:56:48.731Z] ===================================================================================================================
00:13:21.572 [2024-12-05T10:56:48.731Z] Total : 8231.10 32.15 0.00 0.00 15437.63 7369.51 30530.83
00:13:21.853 10376.00 IOPS, 40.53 MiB/s
00:13:21.853 Latency(us)
00:13:21.853 [2024-12-05T10:56:49.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.853 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:13:21.853 Nvme1n1 : 1.01 10416.99 40.69 0.00 0.00 12233.42 7316.87 23477.15
00:13:21.853 [2024-12-05T10:56:49.012Z] ===================================================================================================================
00:13:21.854 [2024-12-05T10:56:49.012Z] Total : 10416.99 40.69 0.00 0.00 12233.42 7316.87 23477.15
00:13:21.854 214304.00 IOPS, 837.12 MiB/s
00:13:21.854 Latency(us)
00:13:21.854 [2024-12-05T10:56:49.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.854 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:13:21.854 Nvme1n1 : 1.00 213919.03 835.62 0.00 0.00 595.18 322.42 1750.26
00:13:21.854 [2024-12-05T10:56:49.013Z] ===================================================================================================================
00:13:21.854 [2024-12-05T10:56:49.013Z] Total : 213919.03 835.62 0.00 0.00 595.18 322.42 1750.26
00:13:21.854 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64100
00:13:21.854 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64102
00:13:21.854 9055.00 IOPS, 35.37 MiB/s
00:13:21.854 Latency(us)
00:13:21.854 [2024-12-05T10:56:49.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.854 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:13:21.854 Nvme1n1 : 1.01 9186.92 35.89 0.00 0.00 13899.06 3921.63 42532.60
00:13:21.854 [2024-12-05T10:56:49.013Z] ===================================================================================================================
00:13:21.854 [2024-12-05T10:56:49.013Z] Total : 9186.92 35.89 0.00 0.00 13899.06 3921.63 42532.60
00:13:21.854 10:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64107
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- #
nvmfcleanup 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:22.112 rmmod nvme_tcp 00:13:22.112 rmmod nvme_fabrics 00:13:22.112 rmmod nvme_keyring 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 64062 ']' 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 64062 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64062 ']' 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64062 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.112 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64062 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.372 killing process with pid 64062 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64062' 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64062 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64062 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:22.372 10:56:49 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:13:22.372 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 
00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:13:22.630 00:13:22.630 real 0m4.534s 00:13:22.630 user 0m17.050s 00:13:22.630 sys 0m2.747s 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.630 ************************************ 00:13:22.630 END TEST nvmf_bdev_io_wait 00:13:22.630 ************************************ 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:22.630 ************************************ 00:13:22.630 START TEST nvmf_queue_depth 00:13:22.630 ************************************ 00:13:22.630 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:22.890 * Looking for test storage... 00:13:22.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:22.890 10:56:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.890 --rc genhtml_branch_coverage=1 00:13:22.890 --rc genhtml_function_coverage=1 00:13:22.890 --rc genhtml_legend=1 00:13:22.890 --rc geninfo_all_blocks=1 00:13:22.890 --rc geninfo_unexecuted_blocks=1 00:13:22.890 00:13:22.890 ' 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.890 --rc genhtml_branch_coverage=1 00:13:22.890 --rc genhtml_function_coverage=1 00:13:22.890 --rc genhtml_legend=1 00:13:22.890 --rc geninfo_all_blocks=1 00:13:22.890 --rc geninfo_unexecuted_blocks=1 00:13:22.890 00:13:22.890 ' 00:13:22.890 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.891 --rc genhtml_branch_coverage=1 00:13:22.891 --rc genhtml_function_coverage=1 00:13:22.891 --rc genhtml_legend=1 00:13:22.891 --rc geninfo_all_blocks=1 00:13:22.891 --rc geninfo_unexecuted_blocks=1 00:13:22.891 00:13:22.891 ' 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.891 --rc genhtml_branch_coverage=1 00:13:22.891 --rc genhtml_function_coverage=1 00:13:22.891 --rc genhtml_legend=1 00:13:22.891 --rc geninfo_all_blocks=1 
00:13:22.891 --rc geninfo_unexecuted_blocks=1 00:13:22.891 00:13:22.891 ' 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:22.891 10:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:22.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 
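The nvmf/common.sh preamble traced above derives the host NQN from `nvme gen-hostnqn` and reuses its UUID as the host ID. A sketch of that identity setup, assuming nvme-cli is installed; the UUID extraction shown is illustrative, not necessarily the helper's exact mechanism:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}   # keep just the UUID portion
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")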
-- # '[' -n '' ']' 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:22.891 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@280 -- # nvmf_veth_init 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n 
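Condensed, the namespace bring-up traced here: the target side of the test runs in its own network namespace, and every target-side command is prefixed with an ip-netns wrapper held in NVMF_TARGET_NS_CMD. All commands below appear in the trace; error handling omitted:

NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
"${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # loopback inside the namespace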
ns=NVMF_TARGET_NS_CMD 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # create_main_bridge 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:22.892 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:23.152 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns 
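The topology built per interface pair, in brief: one veth pair for the initiator and one for the target, with the *_br ends enslaved to the nvmf_br bridge so host-side initiators can reach the in-namespace targets. Every command below appears in the trace; ordering is simplified:

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in initiator0 target0; do
    ip link add "$dev" type veth peer name "${dev}_br"
    ip link set "$dev" up
    ip link set "${dev}_br" up
done
ip link set target0 netns nvmf_ns_spdk     # move the target end into the ns
ip link set initiator0_br master nvmf_br   # enslave the bridge-side ends
ip link set target0_br master nvmf_br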
target0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:13:23.153 10.0.0.1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:23.153 10.0.0.2 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
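The val_to_ip step traced above turns a 32-bit pool value into a dotted quad (167772161 == 0x0A000001 becomes 10.0.0.1). A shift-based reimplementation for illustration; the helper's internals may differ, but the printf format matches the trace:

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) $((  val        & 255 ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2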
nvmf/setup.sh@66 -- # set_up initiator0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:23.153 10:56:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- 
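The address-pool arithmetic driving this second setup_interface_pair call: the pool starts at 0x0a000001 and each initiator/target pair consumes two consecutive values, which is why pair 1 lands on 10.0.0.3/10.0.0.4. A sketch, reusing the val_to_ip sketch above:

ip_pool=$(( 0x0a000001 ))                                 # 167772161
for id in 0 1; do
    echo "initiator$id -> $(val_to_ip "$ip_pool")"        # 10.0.0.1, then 10.0.0.3
    echo "target$id    -> $(val_to_ip $((ip_pool + 1)))"  # 10.0.0.2, then 10.0.0.4
    (( ip_pool += 2 ))
done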
# [[ veth == veth ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:23.153 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:13:23.154 10.0.0.3 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:23.154 10.0.0.4 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:23.154 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:13:23.414 10:56:50 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:13:23.414 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
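The lookups that follow resolve a logical device name through dev_map and read its address back out of the interface's ifalias, which set_ip populated earlier with `echo <ip> | tee /sys/class/net/<dev>/ifalias`. A simplified host-side version of that read path (the in-namespace variant just adds the ip-netns prefix):

get_ip_address() {
    cat "/sys/class/net/$1/ifalias"
}
get_ip_address initiator0   # -> 10.0.0.1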
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:23.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:23.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:13:23.415 00:13:23.415 --- 10.0.0.1 ping statistics --- 00:13:23.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.415 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:23.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:23.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:13:23.415 00:13:23.415 --- 10.0.0.2 ping statistics --- 00:13:23.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.415 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:23.415 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:23.415 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:13:23.415 00:13:23.415 --- 10.0.0.3 ping statistics --- 00:13:23.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.415 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:13:23.415 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:13:23.416 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:23.416 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.114 ms 00:13:23.416 00:13:23.416 --- 10.0.0.4 ping statistics --- 00:13:23.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.416 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
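Summarizing the four pings just traced, which exercise the veth/bridge path in both directions (each returned one reply with zero loss):

ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # ns -> host, initiator0
ping -c 1 10.0.0.2                              # host -> ns, target0
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # ns -> host, initiator1
ping -c 1 10.0.0.4                              # host -> ns, target1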
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # 
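For reference, the legacy environment this nvmf_legacy_env block resolves (target1's value completes just below), mapping the dev_map entries onto the addresses set earlier:

NVMF_FIRST_INITIATOR_IP=10.0.0.1    # initiator0, host side
NVMF_SECOND_INITIATOR_IP=10.0.0.3   # initiator1, host side
NVMF_FIRST_TARGET_IP=10.0.0.2       # target0, inside nvmf_ns_spdk
NVMF_SECOND_TARGET_IP=10.0.0.4      # target1, inside nvmf_ns_spdk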
get_tcp_target_ip_address target1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:13:23.416 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=64392 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns 
exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 64392 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64392 ']' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.675 10:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.675 [2024-12-05 10:56:50.685948] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:23.675 [2024-12-05 10:56:50.686015] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.934 [2024-12-05 10:56:50.838752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.934 [2024-12-05 10:56:50.888016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.934 [2024-12-05 10:56:50.888067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.934 [2024-12-05 10:56:50.888077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.934 [2024-12-05 10:56:50.888085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.934 [2024-12-05 10:56:50.888092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
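The nvmfappstart call traced above boils down to launching nvmf_tgt inside the test namespace and blocking until its RPC socket answers. A minimal sketch of that pattern, assuming the repo path and namespace name used by this run; the readiness loop is a hand-rolled stand-in for the suite's waitforlisten helper, not its actual code:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Launch the NVMe-oF target inside the namespace, with the flags traced above.
    ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket until the app is up and listening.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
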
00:13:23.934 [2024-12-05 10:56:50.888384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.934 [2024-12-05 10:56:50.929116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.499 [2024-12-05 10:56:51.645889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.499 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 Malloc0 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 [2024-12-05 
10:56:51.700912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64424 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64424 /var/tmp/bdevperf.sock 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64424 ']' 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.758 10:56:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:24.758 [2024-12-05 10:56:51.757219] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
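Condensing the rpc_cmd trace above, the target is provisioned end to end with five RPCs: one transport, one malloc bdev, one subsystem, a namespace, and a listener. A sketch using the same commands, assuming rpc.py talks to the target's default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport with the suite's options
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
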
00:13:24.758 [2024-12-05 10:56:51.757301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64424 ] 00:13:24.758 [2024-12-05 10:56:51.910867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.017 [2024-12-05 10:56:51.966934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.017 [2024-12-05 10:56:52.010114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:25.584 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.584 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:25.584 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:25.584 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.585 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.585 NVMe0n1 00:13:25.585 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.585 10:56:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:25.843 Running I/O for 10 seconds... 00:13:27.713 9216.00 IOPS, 36.00 MiB/s [2024-12-05T10:56:56.244Z] 9703.00 IOPS, 37.90 MiB/s [2024-12-05T10:56:57.177Z] 9910.33 IOPS, 38.71 MiB/s [2024-12-05T10:56:58.158Z] 10140.00 IOPS, 39.61 MiB/s [2024-12-05T10:56:59.091Z] 10228.20 IOPS, 39.95 MiB/s [2024-12-05T10:57:00.024Z] 10163.50 IOPS, 39.70 MiB/s [2024-12-05T10:57:01.033Z] 10119.43 IOPS, 39.53 MiB/s [2024-12-05T10:57:01.967Z] 10139.88 IOPS, 39.61 MiB/s [2024-12-05T10:57:02.903Z] 10158.67 IOPS, 39.68 MiB/s [2024-12-05T10:57:02.903Z] 10224.40 IOPS, 39.94 MiB/s 00:13:35.744 Latency(us) 00:13:35.744 [2024-12-05T10:57:02.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.744 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:35.744 Verification LBA range: start 0x0 length 0x4000 00:13:35.744 NVMe0n1 : 10.07 10255.15 40.06 0.00 0.00 99464.81 19687.12 70747.30 00:13:35.744 [2024-12-05T10:57:02.903Z] =================================================================================================================== 00:13:35.744 [2024-12-05T10:57:02.903Z] Total : 10255.15 40.06 0.00 0.00 99464.81 19687.12 70747.30 00:13:35.744 { 00:13:35.744 "results": [ 00:13:35.744 { 00:13:35.744 "job": "NVMe0n1", 00:13:35.744 "core_mask": "0x1", 00:13:35.744 "workload": "verify", 00:13:35.744 "status": "finished", 00:13:35.744 "verify_range": { 00:13:35.744 "start": 0, 00:13:35.744 "length": 16384 00:13:35.744 }, 00:13:35.744 "queue_depth": 1024, 00:13:35.744 "io_size": 4096, 00:13:35.744 "runtime": 10.068303, 00:13:35.744 "iops": 10255.154220130244, 00:13:35.744 "mibps": 40.059196172383764, 00:13:35.744 "io_failed": 0, 00:13:35.744 "io_timeout": 0, 00:13:35.744 "avg_latency_us": 99464.80713553473, 00:13:35.744 "min_latency_us": 19687.11967871486, 00:13:35.744 "max_latency_us": 70747.29638554217 
00:13:35.744 } 00:13:35.744 ], 00:13:35.744 "core_count": 1 00:13:35.744 } 00:13:35.744 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64424 00:13:35.744 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64424 ']' 00:13:35.744 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64424 00:13:35.744 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:35.744 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64424 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.002 killing process with pid 64424 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64424' 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64424 00:13:36.002 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.002 00:13:36.002 Latency(us) 00:13:36.002 [2024-12-05T10:57:03.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.002 [2024-12-05T10:57:03.161Z] =================================================================================================================== 00:13:36.002 [2024-12-05T10:57:03.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.002 10:57:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64424 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:36.261 rmmod nvme_tcp 00:13:36.261 rmmod nvme_fabrics 00:13:36.261 rmmod nvme_keyring 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 64392 ']' 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 64392 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64392 ']' 
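On the initiator side, the run above pairs a standalone bdevperf app with its RPC-driven harness: bdevperf starts with -z (idle until told what to do), the remote namespace is attached over TCP through its private RPC socket, and bdevperf.py kicks off the workload. A condensed sketch using the exact arguments from this run, with the readiness waits between steps elided:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &      # queue depth 1024, 4 KiB I/O, 10 s verify
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
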
00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64392 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64392 00:13:36.261 killing process with pid 64392 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64392' 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64392 00:13:36.261 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64392 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:36.520 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # 
eval ' ip link delete initiator0' 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:13:36.521 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:13:36.779 00:13:36.779 real 0m13.992s 00:13:36.779 user 0m23.112s 00:13:36.779 sys 0m2.896s 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:36.779 ************************************ 00:13:36.779 END TEST nvmf_queue_depth 00:13:36.779 ************************************ 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:36.779 ************************************ 00:13:36.779 START TEST 
nvmf_target_multipath 00:13:36.779 ************************************ 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:36.779 * Looking for test storage... 00:13:36.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:13:36.779 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.040 10:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:37.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.040 --rc genhtml_branch_coverage=1 00:13:37.040 --rc genhtml_function_coverage=1 00:13:37.040 --rc genhtml_legend=1 00:13:37.040 --rc geninfo_all_blocks=1 00:13:37.040 --rc geninfo_unexecuted_blocks=1 00:13:37.040 00:13:37.040 ' 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:37.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.040 --rc genhtml_branch_coverage=1 00:13:37.040 --rc genhtml_function_coverage=1 00:13:37.040 --rc genhtml_legend=1 00:13:37.040 --rc geninfo_all_blocks=1 00:13:37.040 --rc geninfo_unexecuted_blocks=1 00:13:37.040 00:13:37.040 ' 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:37.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.040 --rc genhtml_branch_coverage=1 00:13:37.040 --rc genhtml_function_coverage=1 00:13:37.040 --rc genhtml_legend=1 00:13:37.040 --rc geninfo_all_blocks=1 00:13:37.040 --rc geninfo_unexecuted_blocks=1 00:13:37.040 00:13:37.040 ' 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:37.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.040 --rc genhtml_branch_coverage=1 00:13:37.040 --rc genhtml_function_coverage=1 00:13:37.040 --rc genhtml_legend=1 00:13:37.040 --rc geninfo_all_blocks=1 00:13:37.040 --rc geninfo_unexecuted_blocks=1 00:13:37.040 00:13:37.040 ' 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:37.040 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:37.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:37.041 
10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
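The xtrace in progress here builds the test topology step by step. Condensed from the trace that follows, the namespace, bridge, and first initiator/target veth pair come up roughly as below; the individual link-up steps and the second pair are elided for brevity:

    # Namespace, its loopback, and the bridge tying host-side peers together.
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # First initiator/target veth pair; target0 moves into the namespace.
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    # Host-side peers join the bridge; the NVMe/TCP port is allowed through.
    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
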
00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:37.041 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:37.041 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:13:37.042 
10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:37.042 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:37.301 10.0.0.1 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:37.301 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:37.301 10.0.0.2 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:13:37.301 
10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:37.301 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:13:37.302 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:13:37.302 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:13:37.302 10.0.0.3 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:37.302 10.0.0.4 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:37.302 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:13:37.302 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:37.562 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:37.563 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:37.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:37.563 00:13:37.563 --- 10.0.0.1 ping statistics --- 00:13:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.563 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:37.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:13:37.563 00:13:37.563 --- 10.0.0.2 ping statistics --- 00:13:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.563 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:37.563 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:37.563 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:13:37.563 00:13:37.563 --- 10.0.0.3 ping statistics --- 00:13:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.563 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:13:37.563 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:13:37.563 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:13:37.563 00:13:37.563 --- 10.0.0.4 ping statistics --- 00:13:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.563 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:37.563 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:37.564 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:37.564 10:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:37.564 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 
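At this point setup.sh has built the whole test fabric: two veth pairs (initiator0/target0 and initiator1/target1), each created together with a *_br peer enslaved to the shared nvmf_br bridge, with the target halves moved into the nvmf_ns_spdk namespace where nvmf_tgt will run. The integer IP pool is just packed IPv4: 167772161-167772164 are 0x0A000001-0x0A000004, which val_to_ip's printf '%u.%u.%u.%u' unpacks to 10.0.0.1-10.0.0.4; each address is also written to the device's ifalias so get_ip_address can later read it back from sysfs. A condensed sketch of one setup_interface_pair iteration, assuming nvmf_br and the nvmf_ns_spdk namespace already exist (both are created earlier in setup.sh, outside this excerpt):

    ip link add initiator1 type veth peer name initiator1_br   # create_veth
    ip link add target1 type veth peer name target1_br
    ip link set target1 netns nvmf_ns_spdk                     # add_to_ns: target side lives in the namespace
    ip addr add 10.0.0.3/24 dev initiator1                     # set_ip: val_to_ip 167772163
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip link set initiator1 up                                  # set_up, host side
    ip netns exec nvmf_ns_spdk ip link set target1 up          # set_up, namespace side
    ip link set initiator1_br master nvmf_br                   # add_to_bridge: join the shared bridge
    ip link set initiator1_br up
    ip link set target1_br master nvmf_br
    ip link set target1_br up
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT   # ipts: open the NVMe/TCP port

The ping_ips pass above then validates the wiring in both directions: namespace-to-host for the initiator addresses and host-to-namespace for the target addresses.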
00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=64800 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 64800 00:13:37.823 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 64800 ']' 00:13:37.824 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.824 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.824 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.824 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.824 10:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:37.824 [2024-12-05 10:57:04.841724] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:37.824 [2024-12-05 10:57:04.841837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.083 [2024-12-05 10:57:04.997470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.083 [2024-12-05 10:57:05.048951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.083 [2024-12-05 10:57:05.049004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.083 [2024-12-05 10:57:05.049014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.083 [2024-12-05 10:57:05.049023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.083 [2024-12-05 10:57:05.049030] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:38.083 [2024-12-05 10:57:05.049986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.083 [2024-12-05 10:57:05.050207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.083 [2024-12-05 10:57:05.050342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.083 [2024-12-05 10:57:05.050344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.083 [2024-12-05 10:57:05.092409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.649 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.650 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:38.908 [2024-12-05 10:57:05.975571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.908 10:57:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:13:39.166 Malloc0 00:13:39.166 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:13:39.424 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:39.683 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.683 [2024-12-05 10:57:06.832602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.942 10:57:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:13:39.942 [2024-12-05 10:57:07.028573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:13:39.942 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.201 10:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
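With both connects done, waitforserial saw a block device carrying the SPDKISFASTANDAWESOME serial, and get_subsystem resolved it to the kernel subsystem nvme-subsys0; the two nvme0c*n1 nodes globbed from its sysfs directory are the per-controller multipath legs, one per connected listener. A sketch of that discovery, simplified from the traced multipath.sh helpers (the exact sysfs attribute used for the serial match is an assumption here):

    get_subsystem() {
        local nqn=$1 serial=$2 s
        for s in /sys/class/nvme-subsystem/*; do
            [[ $(<"$s/subsysnqn") == "$nqn" ]] || continue
            # Assumed location of the serial; the real helper reads an equivalent attr.
            grep -qw "$serial" "$s"/nvme*/serial 2> /dev/null || continue
            echo "${s##*/}" && return 0
        done
        return 1
    }
    subsystem=$(get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME)  # -> nvme-subsys0
    paths=("/sys/class/nvme-subsystem/$subsystem"/nvme*/nvme*c*)
    paths=("${paths[@]##*/}")   # basename each entry -> nvme0c0n1 nvme0c1n1

The ANA checks that begin above and continue below then assert that both legs start out optimized before any state is flipped.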
00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:42.736 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64890 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:13:42.737 10:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:42.737 [global] 00:13:42.737 thread=1 00:13:42.737 invalidate=1 00:13:42.737 rw=randrw 00:13:42.737 time_based=1 00:13:42.737 runtime=6 00:13:42.737 ioengine=libaio 00:13:42.737 direct=1 00:13:42.737 bs=4096 00:13:42.737 iodepth=128 00:13:42.737 norandommap=0 00:13:42.737 numjobs=1 00:13:42.737 00:13:42.737 verify_dump=1 00:13:42.737 verify_backlog=512 00:13:42.737 verify_state_save=0 00:13:42.737 do_verify=1 00:13:42.737 verify=crc32c-intel 00:13:42.737 [job0] 00:13:42.737 filename=/dev/nvme0n1 00:13:42.737 Could not set queue depth (nvme0n1) 00:13:42.737 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.737 fio-3.35 00:13:42.737 Starting 1 thread 00:13:43.304 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:43.563 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
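This is the core of the multipath test: while the six-second 4k randrw fio job (queue depth 128, against the multipath node /dev/nvme0n1) is in flight, the listeners' ANA states are flipped over RPC and check_ana_state waits for the kernel to observe each change through sysfs. Note the spelling difference: the RPC takes non_optimized, while sysfs reports non-optimized. A minimal sketch of the helper whose trace surrounds this point, with the simplifying assumption that it just fails after 20 one-second tries (the real helper also reports what it saw on failure):

    check_ana_state() {
        local path=$1 ana_state=$2 timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll until the block node exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }
    check_ana_state nvme0c0n1 inaccessible
    check_ana_state nvme0c1n1 non-optimized

If fio later exits cleanly (the wait on the fio pid below), the kernel successfully rerouted all I/O across the state changes.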
00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:43.822 10:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:44.081 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:44.340 10:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64890 00:13:49.609 00:13:49.609 job0: (groupid=0, jobs=1): err= 0: pid=64911: Thu Dec 5 10:57:15 2024 00:13:49.609 read: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(310MiB/6005msec) 00:13:49.609 slat (usec): min=5, max=10107, avg=41.96, stdev=157.09 00:13:49.609 clat (usec): min=1264, max=26462, avg=6657.43, stdev=1432.86 00:13:49.609 lat (usec): min=1284, max=26484, avg=6699.39, stdev=1438.53 00:13:49.609 clat percentiles (usec): 00:13:49.609 | 1.00th=[ 3818], 5.00th=[ 4686], 10.00th=[ 5473], 20.00th=[ 5997], 00:13:49.609 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:13:49.609 | 70.00th=[ 6849], 80.00th=[ 7111], 90.00th=[ 7767], 95.00th=[ 9503], 00:13:49.609 | 99.00th=[10814], 99.50th=[11994], 99.90th=[23725], 99.95th=[25297], 00:13:49.609 | 99.99th=[26346] 00:13:49.609 bw ( KiB/s): min= 9984, max=35088, per=52.19%, avg=27585.45, stdev=7172.74, samples=11 00:13:49.609 iops : min= 2496, max= 8772, avg=6896.36, stdev=1793.19, samples=11 00:13:49.609 write: IOPS=7768, BW=30.3MiB/s (31.8MB/s)(158MiB/5209msec); 0 zone resets 00:13:49.609 slat (usec): min=11, max=3929, avg=53.37, stdev=102.00 00:13:49.609 clat (usec): min=787, max=25076, avg=5746.52, stdev=1185.11 00:13:49.609 lat (usec): min=843, max=25157, avg=5799.89, stdev=1189.91 00:13:49.609 clat percentiles (usec): 00:13:49.609 | 1.00th=[ 3163], 5.00th=[ 3916], 10.00th=[ 4293], 20.00th=[ 5014], 00:13:49.609 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5800], 60.00th=[ 5997], 00:13:49.609 | 70.00th=[ 6128], 80.00th=[ 6325], 90.00th=[ 6652], 95.00th=[ 7111], 00:13:49.609 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[14615], 99.95th=[16712], 00:13:49.609 | 99.99th=[22938] 00:13:49.609 bw ( KiB/s): min=10288, max=34576, per=88.64%, avg=27546.91, stdev=6905.10, samples=11 00:13:49.609 iops : min= 2572, max= 8644, avg=6886.73, stdev=1726.28, samples=11 00:13:49.609 lat (usec) : 1000=0.01% 00:13:49.609 lat (msec) : 2=0.14%, 4=2.88%, 10=94.78%, 20=2.08%, 50=0.11% 00:13:49.609 cpu : usr=7.51%, sys=30.16%, ctx=7643, majf=0, minf=139 00:13:49.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:49.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:49.609 issued rwts: total=79355,40468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:49.609 00:13:49.609 Run status group 0 (all jobs): 00:13:49.609 READ: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=310MiB (325MB), run=6005-6005msec 00:13:49.609 WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=158MiB (166MB), run=5209-5209msec 00:13:49.609 00:13:49.609 Disk stats (read/write): 00:13:49.609 nvme0n1: ios=78407/39659, merge=0/0, ticks=480148/199205, in_queue=679353, util=98.67% 00:13:49.609 10:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:13:49.609 10:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64991 00:13:49.609 10:57:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:49.609 [global] 00:13:49.609 thread=1 00:13:49.609 invalidate=1 00:13:49.609 rw=randrw 00:13:49.609 time_based=1 00:13:49.609 runtime=6 00:13:49.609 ioengine=libaio 00:13:49.609 direct=1 00:13:49.609 bs=4096 00:13:49.609 iodepth=128 00:13:49.609 norandommap=0 00:13:49.609 numjobs=1 00:13:49.609 00:13:49.609 verify_dump=1 00:13:49.609 verify_backlog=512 00:13:49.609 verify_state_save=0 00:13:49.609 do_verify=1 00:13:49.609 verify=crc32c-intel 00:13:49.609 [job0] 00:13:49.609 filename=/dev/nvme0n1 00:13:49.609 Could not set queue depth (nvme0n1) 00:13:49.609 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:49.609 fio-3.35 00:13:49.609 Starting 1 thread 00:13:50.176 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:50.435 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:50.694 10:57:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:50.954 10:57:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64991 00:13:56.226 00:13:56.226 job0: (groupid=0, jobs=1): err= 0: pid=65017: Thu Dec 5 10:57:22 2024 00:13:56.226 read: IOPS=14.0k, BW=54.7MiB/s (57.4MB/s)(329MiB/6005msec) 00:13:56.226 slat (usec): min=2, max=4647, avg=34.25, stdev=128.01 00:13:56.226 clat (usec): min=248, max=14207, avg=6302.07, stdev=1432.80 00:13:56.226 lat (usec): min=257, max=14224, avg=6336.32, stdev=1441.70 00:13:56.226 clat percentiles (usec): 00:13:56.226 | 1.00th=[ 2933], 5.00th=[ 4015], 10.00th=[ 4490], 20.00th=[ 5276], 00:13:56.226 | 30.00th=[ 5866], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6587], 00:13:56.226 | 70.00th=[ 6783], 80.00th=[ 7046], 90.00th=[ 7570], 95.00th=[ 9241], 00:13:56.226 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12780], 99.95th=[12911], 00:13:56.226 | 99.99th=[13304] 00:13:56.226 bw ( KiB/s): min=13224, max=43792, per=51.37%, avg=28779.64, stdev=9615.22, samples=11 00:13:56.226 iops : min= 3306, max=10948, avg=7194.91, stdev=2403.81, samples=11 00:13:56.226 write: IOPS=8328, BW=32.5MiB/s (34.1MB/s)(169MiB/5186msec); 0 zone resets 00:13:56.226 slat (usec): min=3, max=5370, avg=48.54, stdev=86.11 00:13:56.226 clat (usec): min=268, max=13091, avg=5357.58, stdev=1367.56 00:13:56.226 lat (usec): min=323, max=13131, avg=5406.12, stdev=1375.77 00:13:56.226 clat percentiles (usec): 00:13:56.226 | 1.00th=[ 2409], 5.00th=[ 3195], 10.00th=[ 3589], 20.00th=[ 4113], 00:13:56.226 | 30.00th=[ 4621], 40.00th=[ 5211], 50.00th=[ 5538], 60.00th=[ 5800], 00:13:56.226 | 70.00th=[ 6063], 80.00th=[ 6259], 90.00th=[ 6652], 95.00th=[ 7242], 00:13:56.226 | 99.00th=[ 9503], 99.50th=[10683], 99.90th=[11994], 99.95th=[12256], 00:13:56.226 | 99.99th=[12387] 00:13:56.226 bw ( KiB/s): min=13832, max=44055, per=86.44%, avg=28796.27, stdev=9287.77, samples=11 00:13:56.226 iops : min= 3458, max=11013, avg=7199.00, stdev=2321.82, samples=11 00:13:56.226 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:13:56.226 lat (msec) : 2=0.36%, 4=8.81%, 10=88.93%, 20=1.83% 00:13:56.226 cpu : usr=7.30%, sys=33.21%, ctx=8460, majf=0, minf=139 00:13:56.226 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:56.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:56.226 issued rwts: total=84100,43191,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.226 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:13:56.226 00:13:56.226 Run status group 0 (all jobs): 00:13:56.226 READ: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=329MiB (344MB), run=6005-6005msec 00:13:56.226 WRITE: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=169MiB (177MB), run=5186-5186msec 00:13:56.226 00:13:56.226 Disk stats (read/write): 00:13:56.226 nvme0n1: ios=83308/42485, merge=0/0, ticks=476366/193430, in_queue=669796, util=98.53% 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:56.226 10:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:56.226 rmmod nvme_tcp 00:13:56.226 rmmod nvme_fabrics 00:13:56.226 rmmod nvme_keyring 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # 
'[' -n 64800 ']' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 64800 ']' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.226 killing process with pid 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64800' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 64800 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:13:56.226 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:56.488 10:57:23 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:13:56.488 00:13:56.488 real 0m19.729s 00:13:56.488 user 1m9.978s 00:13:56.488 sys 0m12.761s 00:13:56.488 ************************************ 00:13:56.488 END TEST nvmf_target_multipath 00:13:56.488 ************************************ 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:56.488 10:57:23 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:56.488 ************************************ 00:13:56.488 START TEST nvmf_zcopy 00:13:56.488 ************************************ 00:13:56.488 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:56.752 * Looking for test storage... 00:13:56.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:56.752 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:56.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.753 --rc genhtml_branch_coverage=1 00:13:56.753 --rc genhtml_function_coverage=1 00:13:56.753 --rc genhtml_legend=1 00:13:56.753 --rc geninfo_all_blocks=1 00:13:56.753 --rc geninfo_unexecuted_blocks=1 00:13:56.753 00:13:56.753 ' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:56.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.753 --rc genhtml_branch_coverage=1 00:13:56.753 --rc genhtml_function_coverage=1 00:13:56.753 --rc genhtml_legend=1 00:13:56.753 --rc geninfo_all_blocks=1 00:13:56.753 --rc geninfo_unexecuted_blocks=1 00:13:56.753 00:13:56.753 ' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:56.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.753 --rc genhtml_branch_coverage=1 00:13:56.753 --rc genhtml_function_coverage=1 00:13:56.753 --rc genhtml_legend=1 00:13:56.753 --rc geninfo_all_blocks=1 00:13:56.753 --rc geninfo_unexecuted_blocks=1 00:13:56.753 00:13:56.753 ' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:56.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:56.753 --rc genhtml_branch_coverage=1 00:13:56.753 --rc genhtml_function_coverage=1 00:13:56.753 --rc genhtml_legend=1 00:13:56.753 --rc geninfo_all_blocks=1 00:13:56.753 --rc geninfo_unexecuted_blocks=1 00:13:56.753 00:13:56.753 ' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:56.753 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # 
local -g is_hw=no 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:13:56.753 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:13:56.754 10:57:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:56.754 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:57.019 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:57.019 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:13:57.019 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:13:57.019 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 
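(Note: the set_ip steps traced above draw addresses from an integer pool — setup_interfaces starts at 0x0a000001 = 167772161 — and render them with printf. A minimal sketch of that val_to_ip conversion, assuming the usual octet extraction by bit-shifting; only the printf call and its expanded arguments are visible in the log:

val_to_ip() {
  # 167772161 = 0x0a000001 -> 10.0.0.1; shift out each octet, high to low
  local val=$1
  printf '%u.%u.%u.%u\n' $(( val >> 24 )) $(( (val >> 16) & 255 )) \
                         $(( (val >> 8) & 255 )) $(( val & 255 ))
}
val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
)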
00:13:57.020 10.0.0.1 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:13:57.020 10.0.0.2 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator0 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:13:57.020 10:57:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 
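(Note: condensed, the plumbing for the first initiator/target pair above — together with the bridge attachments that follow — is equivalent to this standalone sketch. Device names, addresses, and the nvmf_ns_spdk namespace are taken from the log; the ipts/iptr firewall bookkeeping is shown separately at the end of this section:

ip netns add nvmf_ns_spdk
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk               # only the target end moves into the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
ip link set target0_br master nvmf_br && ip link set target0_br up
)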
00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:13:57.020 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:13:57.021 10:57:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 
ip=167772163 in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:13:57.021 10.0.0.3 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:13:57.021 10.0.0.4 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator1 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:13:57.021 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:13:57.289 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:57.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:57.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:13:57.290 00:13:57.290 --- 10.0.0.1 ping statistics --- 00:13:57.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.290 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:57.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:13:57.290 00:13:57.290 --- 10.0.0.2 ping statistics --- 00:13:57.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.290 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:13:57.290 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:57.290 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:13:57.290 00:13:57.290 --- 10.0.0.3 ping statistics --- 00:13:57.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.290 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:13:57.290 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
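The address lookups threaded through these pings all follow one pattern: setup.sh stashes each interface's IP in the kernel ifalias when the device is created, then reads it back with cat /sys/class/net/<dev>/ifalias rather than parsing ip-addr output, going through ip netns exec nvmf_ns_spdk for the target-side devices. A condensed sketch of that pattern (simplified from the get_ip_address/ping_ip calls traced here, not the literal helpers; the real ones also resolve names through dev_map and build the namespace prefix with eval):

    get_ip_address() { # usage: get_ip_address <dev> [<netns>]
        local dev=$1 ns=$2
        # the IP was written into the interface alias during setup
        ${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias"
    }
    ping_ip() { # usage: ping_ip <ip> [<netns>]
        local ip=$1 ns=$2
        ${ns:+ip netns exec "$ns"} ping -c 1 "$ip"
    }
    ping_ip "$(get_ip_address initiator0)" nvmf_ns_spdk   # 10.0.0.1: host-side veth, pinged from inside the namespace
    ping_ip "$(get_ip_address target0 nvmf_ns_spdk)"      # 10.0.0.2: namespace-side veth, pinged from the host
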
00:13:57.290 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.127 ms 00:13:57.290 00:13:57.290 --- 10.0.0.4 ping statistics --- 00:13:57.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.290 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.290 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:57.291 10:57:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:13:57.291 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:13:57.559 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:13:57.559 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:13:57.559 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:13:57.559 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:57.559 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=65320 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 65320 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65320 ']' 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.560 10:57:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.560 [2024-12-05 10:57:24.556725] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:57.560 [2024-12-05 10:57:24.556964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.560 [2024-12-05 10:57:24.710599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.831 [2024-12-05 10:57:24.761925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.831 [2024-12-05 10:57:24.762165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.831 [2024-12-05 10:57:24.762263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.831 [2024-12-05 10:57:24.762335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.831 [2024-12-05 10:57:24.762364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.831 [2024-12-05 10:57:24.762691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.831 [2024-12-05 10:57:24.804678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 [2024-12-05 10:57:25.503228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.426 10:57:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 [2024-12-05 10:57:25.527303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 malloc0 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:58.426 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:58.427 { 00:13:58.427 "params": { 00:13:58.427 "name": "Nvme$subsystem", 00:13:58.427 "trtype": "$TEST_TRANSPORT", 00:13:58.427 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:58.427 "adrfam": "ipv4", 00:13:58.427 "trsvcid": "$NVMF_PORT", 00:13:58.427 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:58.427 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:58.427 "hdgst": ${hdgst:-false}, 00:13:58.427 "ddgst": ${ddgst:-false} 
00:13:58.427 }, 00:13:58.427 "method": "bdev_nvme_attach_controller" 00:13:58.427 } 00:13:58.427 EOF 00:13:58.427 )") 00:13:58.427 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:13:58.771 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:13:58.771 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:13:58.771 10:57:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:58.771 "params": { 00:13:58.771 "name": "Nvme1", 00:13:58.771 "trtype": "tcp", 00:13:58.771 "traddr": "10.0.0.2", 00:13:58.771 "adrfam": "ipv4", 00:13:58.771 "trsvcid": "4420", 00:13:58.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.771 "hdgst": false, 00:13:58.771 "ddgst": false 00:13:58.771 }, 00:13:58.771 "method": "bdev_nvme_attach_controller" 00:13:58.771 }' 00:13:58.771 [2024-12-05 10:57:25.626685] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:13:58.772 [2024-12-05 10:57:25.626772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65353 ] 00:13:58.772 [2024-12-05 10:57:25.780251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.772 [2024-12-05 10:57:25.833340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.772 [2024-12-05 10:57:25.886878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.033 Running I/O for 10 seconds... 00:14:00.902 8098.00 IOPS, 63.27 MiB/s [2024-12-05T10:57:29.021Z] 8123.50 IOPS, 63.46 MiB/s [2024-12-05T10:57:30.396Z] 8139.00 IOPS, 63.59 MiB/s [2024-12-05T10:57:31.015Z] 8113.75 IOPS, 63.39 MiB/s [2024-12-05T10:57:32.405Z] 8049.00 IOPS, 62.88 MiB/s [2024-12-05T10:57:33.340Z] 8068.17 IOPS, 63.03 MiB/s [2024-12-05T10:57:34.274Z] 8076.00 IOPS, 63.09 MiB/s [2024-12-05T10:57:35.211Z] 8079.50 IOPS, 63.12 MiB/s [2024-12-05T10:57:36.144Z] 8042.22 IOPS, 62.83 MiB/s [2024-12-05T10:57:36.144Z] 7953.50 IOPS, 62.14 MiB/s 00:14:08.985 Latency(us) 00:14:08.985 [2024-12-05T10:57:36.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:08.985 Verification LBA range: start 0x0 length 0x1000 00:14:08.985 Nvme1n1 : 10.01 7954.84 62.15 0.00 0.00 16045.11 2342.45 28846.37 00:14:08.985 [2024-12-05T10:57:36.144Z] =================================================================================================================== 00:14:08.985 [2024-12-05T10:57:36.144Z] Total : 7954.84 62.15 0.00 0.00 16045.11 2342.45 28846.37 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65476 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w 
randrw -M 50 -o 8192 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:09.244 { 00:14:09.244 "params": { 00:14:09.244 "name": "Nvme$subsystem", 00:14:09.244 "trtype": "$TEST_TRANSPORT", 00:14:09.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:09.244 "adrfam": "ipv4", 00:14:09.244 "trsvcid": "$NVMF_PORT", 00:14:09.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:09.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:09.244 "hdgst": ${hdgst:-false}, 00:14:09.244 "ddgst": ${ddgst:-false} 00:14:09.244 }, 00:14:09.244 "method": "bdev_nvme_attach_controller" 00:14:09.244 } 00:14:09.244 EOF 00:14:09.244 )") 00:14:09.244 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:14:09.245 [2024-12-05 10:57:36.207377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.207420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:14:09.245 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:14:09.245 10:57:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:09.245 "params": { 00:14:09.245 "name": "Nvme1", 00:14:09.245 "trtype": "tcp", 00:14:09.245 "traddr": "10.0.0.2", 00:14:09.245 "adrfam": "ipv4", 00:14:09.245 "trsvcid": "4420", 00:14:09.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.245 "hdgst": false, 00:14:09.245 "ddgst": false 00:14:09.245 }, 00:14:09.245 "method": "bdev_nvme_attach_controller" 00:14:09.245 }' 00:14:09.245 [2024-12-05 10:57:36.219321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.219352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.235293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.235334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.251250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.251285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.255046] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
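Both bdevperf passes in this test attach to the target the same way: gen_nvmf_target_json (from nvmf/common.sh, as the trace shows) renders the bdev_nvme_attach_controller config printed above, and bdevperf reads it through a process-substitution handle, which is why the trace records --json /dev/fd/62 for the first run and /dev/fd/63 for this one. A hedged sketch of the equivalent standalone invocations, flags copied from the trace:

    # first pass: 10 s verify workload, queue depth 128, 8 KiB I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # second pass: 5 s of 50/50 random reads and writes at the same depth and size
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
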
00:14:09.245 [2024-12-05 10:57:36.255140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65476 ] 00:14:09.245 [2024-12-05 10:57:36.267271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.267323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.283234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.283268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.299236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.299284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.311228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.311255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.323198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.323239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.335183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.335219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.347156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.347187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.359145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.359205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.371122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.371172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.383108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.383142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.245 [2024-12-05 10:57:36.399091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.245 [2024-12-05 10:57:36.399131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.410237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.504 [2024-12-05 10:57:36.411067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.411092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.423049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.423087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.435035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.435080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.447017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.447050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.458997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.459024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.468181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.504 [2024-12-05 10:57:36.470979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.471005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.482972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.483008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.494951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.494981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.506933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.506964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.518916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.518945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.520684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:09.504 [2024-12-05 10:57:36.530905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.530956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.542882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.542911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.554865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.554891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.566887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.566931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.578875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.578914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.590867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:09.504 [2024-12-05 10:57:36.590909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.602858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.602900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.614846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.614886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.626843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.626890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 Running I/O for 5 seconds... 00:14:09.504 [2024-12-05 10:57:36.638821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.638853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.504 [2024-12-05 10:57:36.658483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.504 [2024-12-05 10:57:36.658541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.674104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.674156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.690468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.690523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.706764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.706822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.726482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.726530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.742089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.742144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.762856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.762914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.785536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.785619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.800329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.800394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.816171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.816242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
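The wall of repeated error pairs running through this stretch of the log ("Requested NSID 1 already in use" from subsystem.c, immediately followed by "Unable to add namespace" from nvmf_rpc.c) is the test behaving as intended, not a failure. While the second bdevperf instance (perfpid 65476) drives its 5-second randrw workload, zcopy.sh keeps re-adding the namespace that already holds NSID 1; each attempt pauses and resumes the subsystem (hence the nvmf_rpc_ns_paused callback in the error) before failing, forcing in-flight zero-copy requests through the queue/resubmit path this test exists to exercise. A minimal sketch of that driving loop, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and /var/tmp/spdk.sock:

    # poke the pause/resume path while I/O is in flight; every call is
    # expected to fail with "Requested NSID 1 already in use"
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
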
00:14:09.763 [2024-12-05 10:57:36.832244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.832324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.842415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.842477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.856686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.856770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.866597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.866665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.885677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.885756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.763 [2024-12-05 10:57:36.915346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.763 [2024-12-05 10:57:36.915436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:36.931161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:36.931216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:36.946271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:36.946335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:36.962607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:36.962659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:36.974073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:36.974131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:36.989137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:36.989192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.008827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.008886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.025220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.025282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.040840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.040893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.055548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 
[2024-12-05 10:57:37.055604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.071748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.071803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.086669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.086719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.106254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.106320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.121679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.121732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.137836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.137892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.149388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.149445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.167396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.167446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.022 [2024-12-05 10:57:37.181134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.022 [2024-12-05 10:57:37.181183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.196252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.196312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.211435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.211482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.227818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.227862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.247497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.247549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.263141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.263191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.279011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.279066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.298862] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.330 [2024-12-05 10:57:37.298911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.330 [2024-12-05 10:57:37.314130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.314174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.330139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.330180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.350154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.350195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.369023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.369064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.383521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.383560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.399787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.399826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.411468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.411502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.426264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.426323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.331 [2024-12-05 10:57:37.443270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.331 [2024-12-05 10:57:37.443321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.459547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.459589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.475558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.475602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.493741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.493788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.509691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.509736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.528946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.529005] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.547629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.547685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.563941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.563993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.580328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.580376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.598905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.598948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.610726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.610765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.626491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.626533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 13911.00 IOPS, 108.68 MiB/s [2024-12-05T10:57:37.751Z] [2024-12-05 10:57:37.642512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.642551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.653995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.654029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.668984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.669018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.685185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.685220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.701581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.701613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.717496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.717532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.732372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.732406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.592 [2024-12-05 10:57:37.747735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.592 [2024-12-05 10:57:37.747773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.852 [2024-12-05 
00:14:11.631 14504.50 IOPS, 113.32 MiB/s [2024-12-05T10:57:38.790Z]
00:14:12.672 14774.33 IOPS, 115.42 MiB/s [2024-12-05T10:57:39.831Z]
00:14:13.708 14918.25 IOPS, 116.55 MiB/s [2024-12-05T10:57:40.867Z]
00:14:14.486 15042.60 IOPS, 117.52 MiB/s
00:14:14.486 Latency(us)
00:14:14.486 [2024-12-05T10:57:41.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:14.486 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:14.486 Nvme1n1 : 5.01 15044.55 117.54 0.00 0.00 8499.87 2895.16 30530.83
00:14:14.486 [2024-12-05T10:57:41.645Z] ===================================================================================================================
00:14:14.486 [2024-12-05T10:57:41.645Z] Total : 15044.55 117.54 0.00 0.00 8499.87 2895.16 30530.83
00:14:15.005 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65476) - No such process
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65476
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:15.005 delay0
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:15.005 10:57:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:14:15.005 [2024-12-05 10:57:42.150587] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:14:21.567 Initializing NVMe Controllers
00:14:21.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:21.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:21.567 Initialization complete. Launching workers.
00:14:21.567 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 329
00:14:21.567 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 616, failed to submit 33
00:14:21.567 success 507, unsuccessful 109, failed 0
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20}
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:14:21.567 rmmod nvme_tcp
00:14:21.567 rmmod nvme_fabrics
00:14:21.567 rmmod nvme_keyring
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 65320 ']'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 65320
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65320 ']'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65320
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65320
00:14:21.567 killing process with pid 65320
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65320'
00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65320 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65320 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:21.567 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:14:21.568 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:14:21.833 10:57:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:14:21.833 00:14:21.833 real 0m25.200s 00:14:21.833 user 0m39.719s 00:14:21.833 sys 0m8.979s 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.833 ************************************ 00:14:21.833 END TEST nvmf_zcopy 00:14:21.833 ************************************ 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.833 10:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:21.833 ************************************ 00:14:21.834 START TEST nvmf_nmic 00:14:21.834 ************************************ 00:14:21.834 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:21.834 * Looking for test storage... 
00:14:22.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:22.093 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:22.093 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:14:22.093 10:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:22.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.093 --rc genhtml_branch_coverage=1 00:14:22.093 --rc genhtml_function_coverage=1 00:14:22.093 --rc genhtml_legend=1 00:14:22.093 --rc geninfo_all_blocks=1 00:14:22.093 --rc geninfo_unexecuted_blocks=1 00:14:22.093 00:14:22.093 ' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:22.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.093 --rc genhtml_branch_coverage=1 00:14:22.093 --rc genhtml_function_coverage=1 00:14:22.093 --rc genhtml_legend=1 00:14:22.093 --rc geninfo_all_blocks=1 00:14:22.093 --rc geninfo_unexecuted_blocks=1 00:14:22.093 00:14:22.093 ' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:22.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.093 --rc genhtml_branch_coverage=1 00:14:22.093 --rc genhtml_function_coverage=1 00:14:22.093 --rc genhtml_legend=1 00:14:22.093 --rc geninfo_all_blocks=1 00:14:22.093 --rc geninfo_unexecuted_blocks=1 00:14:22.093 00:14:22.093 ' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:22.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.093 --rc genhtml_branch_coverage=1 00:14:22.093 --rc genhtml_function_coverage=1 00:14:22.093 --rc genhtml_legend=1 00:14:22.093 --rc geninfo_all_blocks=1 00:14:22.093 --rc geninfo_unexecuted_blocks=1 00:14:22.093 00:14:22.093 ' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.093 10:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.093 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:22.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.094 10:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@105 -- # delete_main_bridge 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator0 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:22.094 10:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:14:22.094 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:22.352 10:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:14:22.352 10.0.0.1 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:14:22.352 10.0.0.2 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:14:22.352 10:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:14:22.352 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 
initiator1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:14:22.353 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local 
dev=initiator1 ip=167772163 in_ns= 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:14:22.611 10.0.0.3 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:14:22.611 10.0.0.4 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator1 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@38 -- # ping_ips 2 00:14:22.611 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:22.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:22.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:14:22.612 00:14:22.612 --- 10.0.0.1 ping statistics --- 00:14:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.612 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:22.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:22.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:22.612 00:14:22.612 --- 10.0.0.2 ping statistics --- 00:14:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.612 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:14:22.612 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:22.612 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:14:22.612 00:14:22.612 --- 10.0.0.3 ping statistics --- 00:14:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.612 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:14:22.612 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:22.612 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:14:22.612 00:14:22.612 --- 10.0.0.4 ping statistics --- 00:14:22.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.612 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:22.612 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:22.613 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
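Every address lookup in this block — NVMF_FIRST_INITIATOR_IP, NVMF_SECOND_INITIATOR_IP, NVMF_FIRST_TARGET_IP above, and the target1 resolution that continues below — is the same mechanism: nvmf/setup.sh records each interface's IP in the device's ifalias when the fixture is built, so resolving an address is just a read of /sys/class/net/<dev>/ifalias, wrapped in 'ip netns exec nvmf_ns_spdk' when the device lives in the target namespace. A minimal sketch of that helper (the function name is illustrative; the eval indirection and error handling from the trace are omitted):

get_ifalias_ip() {
    # The IP was stored in the device's ifalias by the setup code; read it
    # back, entering the namespace first when one is given.
    local dev=$1 ns=$2
    if [[ -n $ns ]]; then
        ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
    else
        cat "/sys/class/net/$dev/ifalias"
    fi
}
get_ifalias_ip target0 nvmf_ns_spdk    # prints 10.0.0.2 in this run
get_ifalias_ip initiator1              # prints 10.0.0.3 in this run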
00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=65855 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 65855 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 65855 ']' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
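Condensed, nvmfappstart here amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers; the command line is the one traced just below, while the readiness loop is a simplified sketch of waitforlisten (the real helper allows up to 100 retries against /var/tmp/spdk.sock):

ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for (( retry = 0; retry < 100; retry++ )); do
    # rpc_get_methods only succeeds once the app is listening on the socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done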
00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:22.872 10:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.872 [2024-12-05 10:57:49.907627] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:14:22.872 [2024-12-05 10:57:49.907700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.130 [2024-12-05 10:57:50.061808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.130 [2024-12-05 10:57:50.111454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.130 [2024-12-05 10:57:50.111505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.130 [2024-12-05 10:57:50.111516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.130 [2024-12-05 10:57:50.111524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.130 [2024-12-05 10:57:50.111530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.130 [2024-12-05 10:57:50.112362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.130 [2024-12-05 10:57:50.112568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.130 [2024-12-05 10:57:50.113134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.130 [2024-12-05 10:57:50.113133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.130 [2024-12-05 10:57:50.175962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.711 [2024-12-05 10:57:50.839101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.711 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.970 Malloc0 00:14:23.970 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 [2024-12-05 10:57:50.917060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 test case1: single bdev can't be used in multiple subsystems 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 
Malloc0
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:14:23.971 [2024-12-05 10:57:50.948867] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:14:23.971 [2024-12-05 10:57:50.948899] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:14:23.971 [2024-12-05 10:57:50.948909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:23.971 request:
00:14:23.971 {
00:14:23.971 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:14:23.971 "namespace": {
00:14:23.971 "bdev_name": "Malloc0",
00:14:23.971 "no_auto_visible": false,
00:14:23.971 "hide_metadata": false
00:14:23.971 },
00:14:23.971 "method": "nvmf_subsystem_add_ns",
00:14:23.971 "req_id": 1
00:14:23.971 }
00:14:23.971 Got JSON-RPC error response
00:14:23.971 response:
00:14:23.971 {
00:14:23.971 "code": -32602,
00:14:23.971 "message": "Invalid parameters"
00:14:23.971 }
00:14:23.971 Adding namespace failed - expected result. 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:14:23.971 test case2: host connect to nvmf target in multiple paths 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:14:23.971 [2024-12-05 10:57:50.968981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.971 10:57:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:23.971 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:14:24.230 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:14:24.230 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:14:24.230 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:24.230 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
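The two nvme connect invocations above give the host two paths to the same subsystem, one per listener (ports 4420 and 4421), which is exactly what test case2 exercises. The waitforserial call that starts here then polls until a block device carrying the subsystem serial shows up; reconstructed from the trace as a sketch (the real helper also takes an expected device count, fixed to 1 in this run):

waitforserial() {
    local serial=$1 i=0 nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        # Count block devices whose SERIAL column matches the subsystem's
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME    # returns once /dev/nvme0n1 appears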
00:14:24.230 10:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:14:26.134 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:26.134 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:26.134 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:26.392 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:26.392 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:26.392 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:14:26.392 10:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:14:26.392 [global]
00:14:26.392 thread=1
00:14:26.392 invalidate=1
00:14:26.392 rw=write
00:14:26.392 time_based=1
00:14:26.392 runtime=1
00:14:26.392 ioengine=libaio
00:14:26.392 direct=1
00:14:26.392 bs=4096
00:14:26.392 iodepth=1
00:14:26.392 norandommap=0
00:14:26.392 numjobs=1
00:14:26.392
00:14:26.392 verify_dump=1
00:14:26.392 verify_backlog=512
00:14:26.392 verify_state_save=0
00:14:26.392 do_verify=1
00:14:26.392 verify=crc32c-intel
00:14:26.392 [job0]
00:14:26.392 filename=/dev/nvme0n1
00:14:26.392 Could not set queue depth (nvme0n1)
00:14:26.392 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:26.392 fio-3.35
00:14:26.392 Starting 1 thread
00:14:27.769
00:14:27.769 job0: (groupid=0, jobs=1): err= 0: pid=65941: Thu Dec 5 10:57:54 2024
00:14:27.769 read: IOPS=3965, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec)
00:14:27.769 slat (nsec): min=7403, max=74905, avg=9323.38, stdev=3368.05
00:14:27.769 clat (usec): min=102, max=683, avg=139.80, stdev=18.81
00:14:27.769 lat (usec): min=112, max=691, avg=149.12, stdev=19.61
00:14:27.769 clat percentiles (usec):
00:14:27.769 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 126],
00:14:27.769 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 143],
00:14:27.769 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167],
00:14:27.769 | 99.00th=[ 184], 99.50th=[ 190], 99.90th=[ 265], 99.95th=[ 367],
00:14:27.769 | 99.99th=[ 685]
00:14:27.769 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets
00:14:27.769 slat (usec): min=10, max=127, avg=14.33, stdev= 6.64
00:14:27.769 clat (usec): min=57, max=250, avg=83.35, stdev=11.65
00:14:27.769 lat (usec): min=74, max=378, avg=97.68, stdev=14.99
00:14:27.769 clat percentiles (usec):
00:14:27.769 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 74],
00:14:27.769 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85],
00:14:27.769 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 97], 95.00th=[ 104],
00:14:27.769 | 99.00th=[ 118], 99.50th=[ 124], 99.90th=[ 155], 99.95th=[ 206],
00:14:27.769 | 99.99th=[ 251]
00:14:27.769 bw ( KiB/s): min=16318, max=16318, per=99.70%, avg=16318.00, stdev= 0.00, samples=1
00:14:27.769 iops : min= 4079, max= 4079, avg=4079.00, stdev= 0.00, samples=1
00:14:27.769 lat (usec) : 100=47.08%, 250=52.85%, 500=0.06%, 750=0.01%
00:14:27.769 cpu : usr=2.10%, sys=7.80%, ctx=8065, majf=0, minf=5
00:14:27.769 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:27.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:27.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:27.769 issued rwts: total=3969,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:27.769 latency : target=0, window=0, percentile=100.00%, depth=1
00:14:27.769
00:14:27.769 Run status group 0 (all jobs):
00:14:27.769 READ: bw=15.5MiB/s (16.2MB/s), 15.5MiB/s-15.5MiB/s (16.2MB/s-16.2MB/s), io=15.5MiB (16.3MB), run=1001-1001msec
00:14:27.769 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec
00:14:27.769
00:14:27.769 Disk stats (read/write):
00:14:27.769 nvme0n1: ios=3634/3619, merge=0/0, ticks=517/321, in_queue=838, util=91.27%
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:27.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20}
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:14:27.769 rmmod nvme_tcp
00:14:27.769 rmmod nvme_fabrics
00:14:27.769 rmmod nvme_keyring
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0
00:14:27.769 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 65855 ']'
00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 65855
00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 65855 ']'
00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 65855
00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:14:27.770 10:57:54
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65855 00:14:27.770 killing process with pid 65855 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65855' 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 65855 00:14:27.770 10:57:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 65855 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:14:28.029 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:14:28.288 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:14:28.289 00:14:28.289 real 0m6.439s 00:14:28.289 user 0m18.548s 00:14:28.289 sys 0m3.088s 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.289 ************************************ 00:14:28.289 END TEST nvmf_nmic 00:14:28.289 ************************************ 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:28.289 ************************************ 00:14:28.289 START TEST nvmf_fio_target 00:14:28.289 ************************************ 00:14:28.289 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:28.549 * Looking for test storage... 
00:14:28.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:28.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.549 --rc genhtml_branch_coverage=1 00:14:28.549 --rc genhtml_function_coverage=1 00:14:28.549 --rc genhtml_legend=1 00:14:28.549 --rc geninfo_all_blocks=1 00:14:28.549 --rc geninfo_unexecuted_blocks=1 00:14:28.549 00:14:28.549 ' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:28.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.549 --rc genhtml_branch_coverage=1 00:14:28.549 --rc genhtml_function_coverage=1 00:14:28.549 --rc genhtml_legend=1 00:14:28.549 --rc geninfo_all_blocks=1 00:14:28.549 --rc geninfo_unexecuted_blocks=1 00:14:28.549 00:14:28.549 ' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:28.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.549 --rc genhtml_branch_coverage=1 00:14:28.549 --rc genhtml_function_coverage=1 00:14:28.549 --rc genhtml_legend=1 00:14:28.549 --rc geninfo_all_blocks=1 00:14:28.549 --rc geninfo_unexecuted_blocks=1 00:14:28.549 00:14:28.549 ' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:28.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.549 --rc genhtml_branch_coverage=1 00:14:28.549 --rc genhtml_function_coverage=1 00:14:28.549 --rc genhtml_legend=1 00:14:28.549 --rc geninfo_all_blocks=1 00:14:28.549 --rc geninfo_unexecuted_blocks=1 00:14:28.549 00:14:28.549 ' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:28.549 
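The cmp_versions trace above is how scripts/common.sh decides whether the installed lcov predates 2.x and therefore still needs the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling: both version strings are split on '.', '-' and ':' and compared field by field. The same idea as a compact sketch (numeric fields only; the real helper also handles '>', '>=' and the other operators via its case statement):

version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # Missing fields count as 0, so 2 compares like 2.0.0
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal versions are not 'less than'
}
version_lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc options'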
10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:28.549 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' 
']' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:14:28.549 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:28.550 10:57:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:28.550 10:57:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.550 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.809 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- 
# local dev=target0 ns=nvmf_ns_spdk 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:14:28.810 10.0.0.1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:14:28.810 10.0.0.2 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # 
local dev=initiator0 in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:14:28.810 10:57:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:14:28.810 10:57:55 
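Every initiator/target pair follows the same recipe: two veth pairs, the target end moved into the namespace, /24 addresses mirrored into ifalias (which is where the get_ip_address helpers read them back), both _br peers enslaved to nvmf_br, and an INPUT rule for port 4420. Pair 0 is condensed below; pair 1 (10.0.0.3/10.0.0.4) repeats it in the trace that continues after this note. All commands are taken from the trace:

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk            # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
    # NVMe/TCP traffic arrives on the initiator side at port 4420.
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'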
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:14:28.810 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:14:29.071 10.0.0.3 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:14:29.071 10:57:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:14:29.071 10.0.0.4 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:29.071 
10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:29.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:14:29.071 00:14:29.071 --- 10.0.0.1 ping statistics --- 00:14:29.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.071 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:29.071 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:29.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:29.072 00:14:29.072 --- 10.0.0.2 ping statistics --- 00:14:29.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.072 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.072 10:57:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:14:29.072 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:29.072 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:14:29.072 00:14:29.072 --- 10.0.0.3 ping statistics --- 00:14:29.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.072 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:14:29.072 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:29.072 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:14:29.072 00:14:29.072 --- 10.0.0.4 ping statistics --- 00:14:29.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.072 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:29.072 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev 
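The four pings above are the connectivity smoke test: every initiator address must answer from inside the namespace, every target address from the host, proving the veth/bridge path in both directions. A loop equivalent to what ping_ips expands to, with the addresses assigned above:

    # Initiator IPs are reached from the namespace, target IPs from the host.
    for ip in 10.0.0.1 10.0.0.3; do ip netns exec nvmf_ns_spdk ping -c 1 "$ip"; done
    for ip in 10.0.0.2 10.0.0.4; do ping -c 1 "$ip"; done

The variable resolution that continues below simply re-reads those ifalias files to populate the legacy NVMF_* names.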
initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:29.330 10:57:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=66180 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
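nvmftestinit ends here with the legacy environment resolved and the target app launched inside the namespace. Summarized from the trace (the backgrounding and pid capture are a sketch of what the harness's nvmfappstart/waitforlisten helpers do around the command shown; the log reports pid 66180):

    NVMF_TARGET_INTERFACE=target0  NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1  NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2     NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_TRANSPORT_OPTS='-t tcp -o'
    modprobe nvme-tcp
    # Core mask 0xF (four reactors), all tracepoint groups enabled, instance id 0.
    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # waitforlisten then polls /var/tmp/spdk.sock until the app answers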
00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 66180 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66180 ']' 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.330 10:57:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.330 [2024-12-05 10:57:56.411610] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:14:29.330 [2024-12-05 10:57:56.411679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.589 [2024-12-05 10:57:56.565860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.589 [2024-12-05 10:57:56.615163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.589 [2024-12-05 10:57:56.615575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.589 [2024-12-05 10:57:56.615836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.589 [2024-12-05 10:57:56.616113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.589 [2024-12-05 10:57:56.616210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:29.589 [2024-12-05 10:57:56.617318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.589 [2024-12-05 10:57:56.617500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.589 [2024-12-05 10:57:56.618164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.589 [2024-12-05 10:57:56.618161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.589 [2024-12-05 10:57:56.660063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.156 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.156 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:30.156 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:30.156 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.156 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.415 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.415 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:30.415 [2024-12-05 10:57:57.574781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.674 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.932 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:30.932 10:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.191 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:31.191 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.465 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:31.465 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:31.738 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:31.738 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:31.997 10:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.256 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:32.256 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.256 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:32.515 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:14:32.515 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:32.515 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:32.775 10:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:33.034 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:33.034 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.293 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:33.293 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:33.552 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.812 [2024-12-05 10:58:00.759413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.812 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:34.071 10:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:34.071 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:34.330 10:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
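The RPC sequence above provisions everything fio will touch: seven 64 MiB malloc bdevs with 512-byte blocks, a RAID-0 over Malloc2/Malloc3, a concat array over Malloc4-Malloc6, and subsystem cnode1 exposing Malloc0, Malloc1, raid0 and concat0 as four namespaces on 10.0.0.2:4420 — hence the four-device serial check completing just below. Replayed in condensed form (every call appears in the trace, though the listener is added mid-sequence there):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192 B in-capsule data
    for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    done
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Host side: the four namespaces show up as /dev/nvme0n1..n4 with the serial above.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 \
        --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11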
nvme_device_counter )) 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:36.232 10:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:36.232 [global] 00:14:36.232 thread=1 00:14:36.232 invalidate=1 00:14:36.232 rw=write 00:14:36.232 time_based=1 00:14:36.232 runtime=1 00:14:36.232 ioengine=libaio 00:14:36.232 direct=1 00:14:36.232 bs=4096 00:14:36.233 iodepth=1 00:14:36.233 norandommap=0 00:14:36.233 numjobs=1 00:14:36.233 00:14:36.491 verify_dump=1 00:14:36.491 verify_backlog=512 00:14:36.491 verify_state_save=0 00:14:36.491 do_verify=1 00:14:36.491 verify=crc32c-intel 00:14:36.491 [job0] 00:14:36.491 filename=/dev/nvme0n1 00:14:36.491 [job1] 00:14:36.491 filename=/dev/nvme0n2 00:14:36.491 [job2] 00:14:36.491 filename=/dev/nvme0n3 00:14:36.491 [job3] 00:14:36.491 filename=/dev/nvme0n4 00:14:36.491 Could not set queue depth (nvme0n1) 00:14:36.491 Could not set queue depth (nvme0n2) 00:14:36.491 Could not set queue depth (nvme0n3) 00:14:36.491 Could not set queue depth (nvme0n4) 00:14:36.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.491 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.491 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.491 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.491 fio-3.35 00:14:36.491 Starting 4 threads 00:14:37.883 00:14:37.883 job0: (groupid=0, jobs=1): err= 0: pid=66359: Thu Dec 5 10:58:04 2024 00:14:37.883 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:37.883 slat (nsec): min=6265, max=39411, avg=13331.48, stdev=3774.01 00:14:37.883 clat (usec): min=207, max=1504, avg=320.27, stdev=44.62 00:14:37.883 lat (usec): min=225, max=1522, avg=333.60, stdev=45.27 00:14:37.883 clat percentiles (usec): 00:14:37.883 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:14:37.883 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:14:37.883 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 375], 00:14:37.883 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 963], 99.95th=[ 1500], 00:14:37.883 | 99.99th=[ 1500] 00:14:37.883 write: IOPS=1684, BW=6737KiB/s (6899kB/s)(6744KiB/1001msec); 0 zone resets 00:14:37.883 slat (usec): min=8, max=132, avg=23.10, stdev= 8.21 00:14:37.883 clat (usec): min=162, max=463, avg=263.42, stdev=30.23 00:14:37.883 lat (usec): min=186, max=496, avg=286.53, stdev=32.76 00:14:37.883 clat percentiles (usec): 00:14:37.883 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:14:37.883 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:14:37.883 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:14:37.883 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 433], 99.95th=[ 465], 00:14:37.883 | 99.99th=[ 465] 00:14:37.883 bw ( KiB/s): min= 8192, max= 8192, per=31.82%, avg=8192.00, stdev= 0.00, samples=1 00:14:37.883 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:37.883 lat (usec) : 250=19.30%, 500=80.63%, 1000=0.03% 00:14:37.883 lat (msec) : 2=0.03% 00:14:37.883 cpu : usr=1.50%, sys=4.70%, ctx=3222, majf=0, minf=17 00:14:37.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
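The [global]/[jobN] dump above is the job file fio-wrapper generated for '-p nvmf -i 4096 -d 1 -t write -r 1 -v': one 4 KiB sequential-write job per namespace, queue depth 1, one second each, with crc32c-intel verification. A direct command-line equivalent, reconstructed from that dump (a sketch; the wrapper itself feeds fio the job file shown). The per-job results continue below:

    fio --ioengine=libaio --direct=1 --thread=1 --bs=4096 --iodepth=1 \
        --rw=write --time_based=1 --runtime=1 --invalidate=1 --numjobs=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0 \
        --name=job0 --filename=/dev/nvme0n1 \
        --name=job1 --filename=/dev/nvme0n2 \
        --name=job2 --filename=/dev/nvme0n3 \
        --name=job3 --filename=/dev/nvme0n4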
00:14:37.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.883 issued rwts: total=1536,1686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.883 job1: (groupid=0, jobs=1): err= 0: pid=66360: Thu Dec 5 10:58:04 2024 00:14:37.883 read: IOPS=1453, BW=5814KiB/s (5954kB/s)(5820KiB/1001msec) 00:14:37.883 slat (nsec): min=7412, max=74265, avg=18267.67, stdev=6685.15 00:14:37.883 clat (usec): min=137, max=1310, avg=338.79, stdev=76.05 00:14:37.883 lat (usec): min=145, max=1322, avg=357.06, stdev=79.38 00:14:37.883 clat percentiles (usec): 00:14:37.883 | 1.00th=[ 182], 5.00th=[ 219], 10.00th=[ 239], 20.00th=[ 285], 00:14:37.883 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:14:37.883 | 70.00th=[ 367], 80.00th=[ 396], 90.00th=[ 437], 95.00th=[ 465], 00:14:37.883 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 627], 99.95th=[ 1319], 00:14:37.883 | 99.99th=[ 1319] 00:14:37.883 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:37.883 slat (usec): min=15, max=197, avg=36.10, stdev=12.83 00:14:37.883 clat (usec): min=97, max=653, avg=272.29, stdev=69.71 00:14:37.883 lat (usec): min=115, max=716, avg=308.39, stdev=78.48 00:14:37.883 clat percentiles (usec): 00:14:37.883 | 1.00th=[ 135], 5.00th=[ 161], 10.00th=[ 180], 20.00th=[ 206], 00:14:37.883 | 30.00th=[ 229], 40.00th=[ 255], 50.00th=[ 277], 60.00th=[ 293], 00:14:37.883 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 371], 95.00th=[ 383], 00:14:37.883 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 652], 00:14:37.883 | 99.99th=[ 652] 00:14:37.883 bw ( KiB/s): min= 8192, max= 8192, per=31.82%, avg=8192.00, stdev= 0.00, samples=1 00:14:37.883 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:37.883 lat (usec) : 100=0.03%, 250=24.67%, 500=74.52%, 750=0.74% 00:14:37.883 lat (msec) : 2=0.03% 00:14:37.883 cpu : usr=1.70%, sys=7.30%, ctx=2991, majf=0, minf=10 00:14:37.883 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.883 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.883 issued rwts: total=1455,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.883 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.883 job2: (groupid=0, jobs=1): err= 0: pid=66361: Thu Dec 5 10:58:04 2024 00:14:37.883 read: IOPS=1363, BW=5455KiB/s (5585kB/s)(5460KiB/1001msec) 00:14:37.883 slat (usec): min=6, max=111, avg=23.60, stdev= 8.67 00:14:37.883 clat (usec): min=153, max=7183, avg=353.38, stdev=216.88 00:14:37.883 lat (usec): min=172, max=7194, avg=376.98, stdev=217.80 00:14:37.884 clat percentiles (usec): 00:14:37.884 | 1.00th=[ 192], 5.00th=[ 237], 10.00th=[ 265], 20.00th=[ 297], 00:14:37.884 | 30.00th=[ 310], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:14:37.884 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 437], 95.00th=[ 465], 00:14:37.884 | 99.00th=[ 523], 99.50th=[ 914], 99.90th=[ 2540], 99.95th=[ 7177], 00:14:37.884 | 99.99th=[ 7177] 00:14:37.884 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:37.884 slat (usec): min=16, max=125, avg=31.87, stdev= 9.53 00:14:37.884 clat (usec): min=120, max=1080, avg=279.47, stdev=74.86 00:14:37.884 lat (usec): min=140, max=1104, avg=311.34, stdev=79.31 00:14:37.884 clat 
percentiles (usec): 00:14:37.884 | 1.00th=[ 130], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 210], 00:14:37.884 | 30.00th=[ 237], 40.00th=[ 265], 50.00th=[ 285], 60.00th=[ 302], 00:14:37.884 | 70.00th=[ 318], 80.00th=[ 343], 90.00th=[ 379], 95.00th=[ 396], 00:14:37.884 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 799], 99.95th=[ 1074], 00:14:37.884 | 99.99th=[ 1074] 00:14:37.884 bw ( KiB/s): min= 8192, max= 8192, per=31.82%, avg=8192.00, stdev= 0.00, samples=1 00:14:37.884 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:37.884 lat (usec) : 250=21.41%, 500=77.80%, 750=0.48%, 1000=0.07% 00:14:37.884 lat (msec) : 2=0.14%, 4=0.07%, 10=0.03% 00:14:37.884 cpu : usr=2.50%, sys=6.60%, ctx=2911, majf=0, minf=8 00:14:37.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.884 issued rwts: total=1365,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.884 job3: (groupid=0, jobs=1): err= 0: pid=66362: Thu Dec 5 10:58:04 2024 00:14:37.884 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:14:37.884 slat (nsec): min=6903, max=38061, avg=12871.65, stdev=3800.13 00:14:37.884 clat (usec): min=260, max=1586, avg=321.23, stdev=45.80 00:14:37.884 lat (usec): min=275, max=1610, avg=334.10, stdev=46.54 00:14:37.884 clat percentiles (usec): 00:14:37.884 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 297], 00:14:37.884 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:14:37.884 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 371], 00:14:37.884 | 99.00th=[ 400], 99.50th=[ 420], 99.90th=[ 938], 99.95th=[ 1582], 00:14:37.884 | 99.99th=[ 1582] 00:14:37.884 write: IOPS=1682, BW=6729KiB/s (6891kB/s)(6736KiB/1001msec); 0 zone resets 00:14:37.884 slat (usec): min=14, max=121, avg=29.28, stdev= 9.04 00:14:37.884 clat (usec): min=133, max=380, avg=256.69, stdev=28.88 00:14:37.884 lat (usec): min=179, max=470, avg=285.97, stdev=31.78 00:14:37.884 clat percentiles (usec): 00:14:37.884 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 233], 00:14:37.884 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:14:37.884 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:14:37.884 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 379], 99.95th=[ 379], 00:14:37.884 | 99.99th=[ 379] 00:14:37.884 bw ( KiB/s): min= 8192, max= 8192, per=31.82%, avg=8192.00, stdev= 0.00, samples=1 00:14:37.884 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:37.884 lat (usec) : 250=24.10%, 500=75.84%, 1000=0.03% 00:14:37.884 lat (msec) : 2=0.03% 00:14:37.884 cpu : usr=1.70%, sys=5.80%, ctx=3222, majf=0, minf=15 00:14:37.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.884 issued rwts: total=1536,1684,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.884 00:14:37.884 Run status group 0 (all jobs): 00:14:37.884 READ: bw=23.0MiB/s (24.1MB/s), 5455KiB/s-6138KiB/s (5585kB/s-6285kB/s), io=23.0MiB (24.1MB), run=1001-1001msec 00:14:37.884 WRITE: bw=25.1MiB/s (26.4MB/s), 6138KiB/s-6737KiB/s 
(6285kB/s-6899kB/s), io=25.2MiB (26.4MB), run=1001-1001msec 00:14:37.884 00:14:37.884 Disk stats (read/write): 00:14:37.884 nvme0n1: ios=1306/1536, merge=0/0, ticks=419/353, in_queue=772, util=89.07% 00:14:37.884 nvme0n2: ios=1172/1536, merge=0/0, ticks=354/433, in_queue=787, util=89.08% 00:14:37.884 nvme0n3: ios=1062/1536, merge=0/0, ticks=434/409, in_queue=843, util=90.02% 00:14:37.884 nvme0n4: ios=1254/1536, merge=0/0, ticks=397/401, in_queue=798, util=89.75% 00:14:37.884 10:58:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:37.884 [global] 00:14:37.884 thread=1 00:14:37.884 invalidate=1 00:14:37.884 rw=randwrite 00:14:37.884 time_based=1 00:14:37.884 runtime=1 00:14:37.884 ioengine=libaio 00:14:37.884 direct=1 00:14:37.884 bs=4096 00:14:37.884 iodepth=1 00:14:37.884 norandommap=0 00:14:37.884 numjobs=1 00:14:37.884 00:14:37.884 verify_dump=1 00:14:37.884 verify_backlog=512 00:14:37.884 verify_state_save=0 00:14:37.884 do_verify=1 00:14:37.884 verify=crc32c-intel 00:14:37.884 [job0] 00:14:37.884 filename=/dev/nvme0n1 00:14:37.884 [job1] 00:14:37.884 filename=/dev/nvme0n2 00:14:37.884 [job2] 00:14:37.884 filename=/dev/nvme0n3 00:14:37.884 [job3] 00:14:37.884 filename=/dev/nvme0n4 00:14:37.884 Could not set queue depth (nvme0n1) 00:14:37.884 Could not set queue depth (nvme0n2) 00:14:37.884 Could not set queue depth (nvme0n3) 00:14:37.884 Could not set queue depth (nvme0n4) 00:14:38.141 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.141 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.141 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.141 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.141 fio-3.35 00:14:38.141 Starting 4 threads 00:14:39.518 00:14:39.518 job0: (groupid=0, jobs=1): err= 0: pid=66425: Thu Dec 5 10:58:06 2024 00:14:39.518 read: IOPS=2024, BW=8100KiB/s (8294kB/s)(8108KiB/1001msec) 00:14:39.518 slat (usec): min=7, max=101, avg=11.42, stdev= 5.87 00:14:39.518 clat (usec): min=143, max=2420, avg=261.57, stdev=67.50 00:14:39.518 lat (usec): min=151, max=2432, avg=272.99, stdev=68.60 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 200], 20.00th=[ 225], 00:14:39.518 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:14:39.518 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 338], 00:14:39.518 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 437], 99.95th=[ 848], 00:14:39.518 | 99.99th=[ 2409] 00:14:39.518 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:14:39.518 slat (usec): min=11, max=109, avg=20.51, stdev= 9.59 00:14:39.518 clat (usec): min=85, max=938, avg=194.72, stdev=44.43 00:14:39.518 lat (usec): min=99, max=970, avg=215.23, stdev=48.90 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 111], 5.00th=[ 130], 10.00th=[ 147], 20.00th=[ 163], 00:14:39.518 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 202], 00:14:39.518 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 249], 95.00th=[ 265], 00:14:39.518 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 461], 99.95th=[ 685], 00:14:39.518 | 99.99th=[ 938] 00:14:39.518 bw ( KiB/s): min= 9472, max= 9472, per=31.40%, avg=9472.00, 
stdev= 0.00, samples=1 00:14:39.518 iops : min= 2368, max= 2368, avg=2368.00, stdev= 0.00, samples=1 00:14:39.518 lat (usec) : 100=0.22%, 250=67.71%, 500=31.98%, 750=0.02%, 1000=0.05% 00:14:39.518 lat (msec) : 4=0.02% 00:14:39.518 cpu : usr=1.70%, sys=5.20%, ctx=4075, majf=0, minf=9 00:14:39.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 issued rwts: total=2027,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.518 job1: (groupid=0, jobs=1): err= 0: pid=66426: Thu Dec 5 10:58:06 2024 00:14:39.518 read: IOPS=1357, BW=5431KiB/s (5561kB/s)(5436KiB/1001msec) 00:14:39.518 slat (nsec): min=7489, max=57876, avg=20140.30, stdev=8330.87 00:14:39.518 clat (usec): min=159, max=2643, avg=355.83, stdev=111.76 00:14:39.518 lat (usec): min=175, max=2690, avg=375.97, stdev=117.40 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 202], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 269], 00:14:39.518 | 30.00th=[ 289], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 371], 00:14:39.518 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 478], 95.00th=[ 498], 00:14:39.518 | 99.00th=[ 545], 99.50th=[ 611], 99.90th=[ 1074], 99.95th=[ 2638], 00:14:39.518 | 99.99th=[ 2638] 00:14:39.518 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:39.518 slat (usec): min=8, max=445, avg=31.14, stdev=17.22 00:14:39.518 clat (usec): min=88, max=2702, avg=282.91, stdev=155.85 00:14:39.518 lat (usec): min=104, max=2797, avg=314.05, stdev=162.17 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 109], 5.00th=[ 141], 10.00th=[ 161], 20.00th=[ 198], 00:14:39.518 | 30.00th=[ 221], 40.00th=[ 241], 50.00th=[ 265], 60.00th=[ 293], 00:14:39.518 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 392], 95.00th=[ 433], 00:14:39.518 | 99.00th=[ 510], 99.50th=[ 1369], 99.90th=[ 2180], 99.95th=[ 2704], 00:14:39.518 | 99.99th=[ 2704] 00:14:39.518 bw ( KiB/s): min= 7888, max= 7888, per=26.15%, avg=7888.00, stdev= 0.00, samples=1 00:14:39.518 iops : min= 1972, max= 1972, avg=1972.00, stdev= 0.00, samples=1 00:14:39.518 lat (usec) : 100=0.21%, 250=29.19%, 500=67.70%, 750=2.35%, 1000=0.14% 00:14:39.518 lat (msec) : 2=0.28%, 4=0.14% 00:14:39.518 cpu : usr=2.00%, sys=6.40%, ctx=2909, majf=0, minf=20 00:14:39.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 issued rwts: total=1359,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.518 job2: (groupid=0, jobs=1): err= 0: pid=66428: Thu Dec 5 10:58:06 2024 00:14:39.518 read: IOPS=1256, BW=5027KiB/s (5148kB/s)(5032KiB/1001msec) 00:14:39.518 slat (usec): min=5, max=203, avg=18.25, stdev= 9.49 00:14:39.518 clat (usec): min=184, max=1368, avg=372.50, stdev=99.64 00:14:39.518 lat (usec): min=205, max=1572, avg=390.75, stdev=105.14 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 217], 5.00th=[ 233], 10.00th=[ 245], 20.00th=[ 269], 00:14:39.518 | 30.00th=[ 297], 40.00th=[ 343], 50.00th=[ 379], 60.00th=[ 412], 00:14:39.518 | 70.00th=[ 441], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 506], 00:14:39.518 | 99.00th=[ 
545], 99.50th=[ 562], 99.90th=[ 1205], 99.95th=[ 1369], 00:14:39.518 | 99.99th=[ 1369] 00:14:39.518 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:14:39.518 slat (usec): min=11, max=284, avg=34.51, stdev=15.98 00:14:39.518 clat (usec): min=82, max=4012, avg=292.26, stdev=141.76 00:14:39.518 lat (usec): min=132, max=4038, avg=326.77, stdev=145.48 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 135], 5.00th=[ 167], 10.00th=[ 190], 20.00th=[ 217], 00:14:39.518 | 30.00th=[ 235], 40.00th=[ 262], 50.00th=[ 289], 60.00th=[ 310], 00:14:39.518 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 412], 00:14:39.518 | 99.00th=[ 469], 99.50th=[ 725], 99.90th=[ 2180], 99.95th=[ 4015], 00:14:39.518 | 99.99th=[ 4015] 00:14:39.518 bw ( KiB/s): min= 8192, max= 8192, per=27.16%, avg=8192.00, stdev= 0.00, samples=1 00:14:39.518 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:14:39.518 lat (usec) : 100=0.04%, 250=25.66%, 500=70.90%, 750=3.08%, 1000=0.07% 00:14:39.518 lat (msec) : 2=0.18%, 4=0.04%, 10=0.04% 00:14:39.518 cpu : usr=2.10%, sys=6.10%, ctx=2824, majf=0, minf=7 00:14:39.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 issued rwts: total=1258,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.518 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.518 job3: (groupid=0, jobs=1): err= 0: pid=66429: Thu Dec 5 10:58:06 2024 00:14:39.518 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:14:39.518 slat (nsec): min=8883, max=93935, avg=12433.96, stdev=5033.14 00:14:39.518 clat (usec): min=141, max=2240, avg=234.49, stdev=70.43 00:14:39.518 lat (usec): min=153, max=2250, avg=246.92, stdev=72.71 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:14:39.518 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:14:39.518 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 293], 95.00th=[ 347], 00:14:39.518 | 99.00th=[ 429], 99.50th=[ 486], 99.90th=[ 750], 99.95th=[ 1139], 00:14:39.518 | 99.99th=[ 2245] 00:14:39.518 write: IOPS=2425, BW=9702KiB/s (9935kB/s)(9712KiB/1001msec); 0 zone resets 00:14:39.518 slat (usec): min=8, max=130, avg=20.61, stdev=10.29 00:14:39.518 clat (usec): min=90, max=532, avg=180.63, stdev=53.16 00:14:39.518 lat (usec): min=109, max=575, avg=201.25, stdev=60.64 00:14:39.518 clat percentiles (usec): 00:14:39.518 | 1.00th=[ 109], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 141], 00:14:39.518 | 30.00th=[ 149], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 174], 00:14:39.518 | 70.00th=[ 184], 80.00th=[ 208], 90.00th=[ 269], 95.00th=[ 293], 00:14:39.518 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 404], 99.95th=[ 515], 00:14:39.518 | 99.99th=[ 529] 00:14:39.518 bw ( KiB/s): min= 9336, max= 9336, per=30.95%, avg=9336.00, stdev= 0.00, samples=1 00:14:39.518 iops : min= 2334, max= 2334, avg=2334.00, stdev= 0.00, samples=1 00:14:39.518 lat (usec) : 100=0.18%, 250=83.33%, 500=16.26%, 750=0.16%, 1000=0.02% 00:14:39.518 lat (msec) : 2=0.02%, 4=0.02% 00:14:39.518 cpu : usr=1.50%, sys=5.80%, ctx=4478, majf=0, minf=12 00:14:39.518 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.518 issued rwts: total=2048,2428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.519 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.519 00:14:39.519 Run status group 0 (all jobs): 00:14:39.519 READ: bw=26.1MiB/s (27.4MB/s), 5027KiB/s-8184KiB/s (5148kB/s-8380kB/s), io=26.1MiB (27.4MB), run=1001-1001msec 00:14:39.519 WRITE: bw=29.5MiB/s (30.9MB/s), 6138KiB/s-9702KiB/s (6285kB/s-9935kB/s), io=29.5MiB (30.9MB), run=1001-1001msec 00:14:39.519 00:14:39.519 Disk stats (read/write): 00:14:39.519 nvme0n1: ios=1586/1989, merge=0/0, ticks=438/396, in_queue=834, util=86.86% 00:14:39.519 nvme0n2: ios=1073/1271, merge=0/0, ticks=408/363, in_queue=771, util=86.36% 00:14:39.519 nvme0n3: ios=1024/1340, merge=0/0, ticks=363/396, in_queue=759, util=88.77% 00:14:39.519 nvme0n4: ios=1903/2048, merge=0/0, ticks=440/366, in_queue=806, util=89.53% 00:14:39.519 10:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:39.519 [global] 00:14:39.519 thread=1 00:14:39.519 invalidate=1 00:14:39.519 rw=write 00:14:39.519 time_based=1 00:14:39.519 runtime=1 00:14:39.519 ioengine=libaio 00:14:39.519 direct=1 00:14:39.519 bs=4096 00:14:39.519 iodepth=128 00:14:39.519 norandommap=0 00:14:39.519 numjobs=1 00:14:39.519 00:14:39.519 verify_dump=1 00:14:39.519 verify_backlog=512 00:14:39.519 verify_state_save=0 00:14:39.519 do_verify=1 00:14:39.519 verify=crc32c-intel 00:14:39.519 [job0] 00:14:39.519 filename=/dev/nvme0n1 00:14:39.519 [job1] 00:14:39.519 filename=/dev/nvme0n2 00:14:39.519 [job2] 00:14:39.519 filename=/dev/nvme0n3 00:14:39.519 [job3] 00:14:39.519 filename=/dev/nvme0n4 00:14:39.519 Could not set queue depth (nvme0n1) 00:14:39.519 Could not set queue depth (nvme0n2) 00:14:39.519 Could not set queue depth (nvme0n3) 00:14:39.519 Could not set queue depth (nvme0n4) 00:14:39.519 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.519 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.519 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.519 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:39.519 fio-3.35 00:14:39.519 Starting 4 threads 00:14:40.918 00:14:40.918 job0: (groupid=0, jobs=1): err= 0: pid=66484: Thu Dec 5 10:58:07 2024 00:14:40.918 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:14:40.918 slat (usec): min=5, max=6139, avg=154.26, stdev=728.23 00:14:40.918 clat (usec): min=13426, max=25274, avg=20307.96, stdev=1971.67 00:14:40.918 lat (usec): min=15112, max=25551, avg=20462.22, stdev=1858.90 00:14:40.918 clat percentiles (usec): 00:14:40.918 | 1.00th=[15533], 5.00th=[17433], 10.00th=[18482], 20.00th=[18744], 00:14:40.918 | 30.00th=[19268], 40.00th=[19530], 50.00th=[20055], 60.00th=[20579], 00:14:40.918 | 70.00th=[21103], 80.00th=[21627], 90.00th=[23462], 95.00th=[23987], 00:14:40.918 | 99.00th=[25035], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:14:40.918 | 99.99th=[25297] 00:14:40.918 write: IOPS=3192, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1003msec); 0 zone resets 00:14:40.918 slat (usec): min=7, max=5715, avg=152.30, stdev=676.51 00:14:40.918 clat (usec): min=2887, max=26737, avg=19981.87, stdev=2927.51 00:14:40.918 lat (usec): min=2918, max=26780, avg=20134.17, 
stdev=2873.07 00:14:40.918 clat percentiles (usec): 00:14:40.918 | 1.00th=[ 8455], 5.00th=[16909], 10.00th=[17957], 20.00th=[18744], 00:14:40.918 | 30.00th=[19268], 40.00th=[19530], 50.00th=[20055], 60.00th=[20317], 00:14:40.918 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22676], 95.00th=[25035], 00:14:40.918 | 99.00th=[26608], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:14:40.918 | 99.99th=[26608] 00:14:40.919 bw ( KiB/s): min=12263, max=12312, per=29.10%, avg=12287.50, stdev=34.65, samples=2 00:14:40.919 iops : min= 3065, max= 3078, avg=3071.50, stdev= 9.19, samples=2 00:14:40.919 lat (msec) : 4=0.45%, 10=0.57%, 20=49.97%, 50=49.01% 00:14:40.919 cpu : usr=2.89%, sys=13.37%, ctx=252, majf=0, minf=5 00:14:40.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:40.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.919 issued rwts: total=3072,3202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.919 job1: (groupid=0, jobs=1): err= 0: pid=66485: Thu Dec 5 10:58:07 2024 00:14:40.919 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:14:40.919 slat (usec): min=9, max=5476, avg=151.29, stdev=600.51 00:14:40.919 clat (usec): min=14149, max=27657, avg=19828.81, stdev=1989.13 00:14:40.919 lat (usec): min=14179, max=27706, avg=19980.10, stdev=2052.33 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[15008], 5.00th=[16188], 10.00th=[16909], 20.00th=[18482], 00:14:40.919 | 30.00th=[19006], 40.00th=[19530], 50.00th=[19792], 60.00th=[20317], 00:14:40.919 | 70.00th=[20841], 80.00th=[21103], 90.00th=[22152], 95.00th=[23200], 00:14:40.919 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26608], 99.95th=[26870], 00:14:40.919 | 99.99th=[27657] 00:14:40.919 write: IOPS=3316, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec); 0 zone resets 00:14:40.919 slat (usec): min=18, max=6691, avg=148.31, stdev=632.93 00:14:40.919 clat (usec): min=244, max=27262, avg=19677.81, stdev=2586.57 00:14:40.919 lat (usec): min=6623, max=27356, avg=19826.12, stdev=2631.31 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[ 7832], 5.00th=[15664], 10.00th=[16909], 20.00th=[18220], 00:14:40.919 | 30.00th=[19268], 40.00th=[19792], 50.00th=[19792], 60.00th=[20317], 00:14:40.919 | 70.00th=[20579], 80.00th=[21365], 90.00th=[21890], 95.00th=[23462], 00:14:40.919 | 99.00th=[25822], 99.50th=[26346], 99.90th=[27132], 99.95th=[27132], 00:14:40.919 | 99.99th=[27132] 00:14:40.919 bw ( KiB/s): min=12263, max=12263, per=29.04%, avg=12263.00, stdev= 0.00, samples=1 00:14:40.919 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:14:40.919 lat (usec) : 250=0.02% 00:14:40.919 lat (msec) : 10=0.66%, 20=53.18%, 50=46.15% 00:14:40.919 cpu : usr=5.09%, sys=13.39%, ctx=328, majf=0, minf=6 00:14:40.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:40.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.919 issued rwts: total=3072,3323,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.919 job2: (groupid=0, jobs=1): err= 0: pid=66486: Thu Dec 5 10:58:07 2024 00:14:40.919 read: IOPS=1841, BW=7367KiB/s (7544kB/s)(7404KiB/1005msec) 00:14:40.919 slat (usec): min=7, max=10797, 
avg=258.19, stdev=1185.92 00:14:40.919 clat (usec): min=1434, max=63422, avg=32198.70, stdev=7337.28 00:14:40.919 lat (usec): min=11300, max=63449, avg=32456.90, stdev=7387.95 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[11469], 5.00th=[22676], 10.00th=[25035], 20.00th=[28705], 00:14:40.919 | 30.00th=[28967], 40.00th=[29492], 50.00th=[30540], 60.00th=[32113], 00:14:40.919 | 70.00th=[34866], 80.00th=[39060], 90.00th=[41157], 95.00th=[43254], 00:14:40.919 | 99.00th=[54264], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 00:14:40.919 | 99.99th=[63177] 00:14:40.919 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:14:40.919 slat (usec): min=8, max=11922, avg=244.41, stdev=1150.56 00:14:40.919 clat (usec): min=12545, max=92152, avg=32666.02, stdev=18439.30 00:14:40.919 lat (usec): min=13648, max=92185, avg=32910.43, stdev=18569.43 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[16319], 5.00th=[19530], 10.00th=[19792], 20.00th=[20579], 00:14:40.919 | 30.00th=[21365], 40.00th=[22414], 50.00th=[23200], 60.00th=[24511], 00:14:40.919 | 70.00th=[29230], 80.00th=[51119], 90.00th=[65274], 95.00th=[72877], 00:14:40.919 | 99.00th=[84411], 99.50th=[87557], 99.90th=[91751], 99.95th=[91751], 00:14:40.919 | 99.99th=[91751] 00:14:40.919 bw ( KiB/s): min= 6840, max= 9524, per=19.37%, avg=8182.00, stdev=1897.87, samples=2 00:14:40.919 iops : min= 1710, max= 2381, avg=2045.50, stdev=474.47, samples=2 00:14:40.919 lat (msec) : 2=0.03%, 20=9.72%, 50=77.66%, 100=12.59% 00:14:40.919 cpu : usr=2.39%, sys=8.27%, ctx=169, majf=0, minf=15 00:14:40.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:40.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.919 issued rwts: total=1851,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.919 job3: (groupid=0, jobs=1): err= 0: pid=66487: Thu Dec 5 10:58:07 2024 00:14:40.919 read: IOPS=1782, BW=7129KiB/s (7300kB/s)(7172KiB/1006msec) 00:14:40.919 slat (usec): min=5, max=21947, avg=313.99, stdev=1827.06 00:14:40.919 clat (usec): min=681, max=82297, avg=38446.06, stdev=15305.03 00:14:40.919 lat (usec): min=11189, max=82307, avg=38760.04, stdev=15311.88 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[11731], 5.00th=[21627], 10.00th=[24511], 20.00th=[25297], 00:14:40.919 | 30.00th=[27132], 40.00th=[32900], 50.00th=[37487], 60.00th=[39060], 00:14:40.919 | 70.00th=[40109], 80.00th=[44303], 90.00th=[65274], 95.00th=[72877], 00:14:40.919 | 99.00th=[82314], 99.50th=[82314], 99.90th=[82314], 99.95th=[82314], 00:14:40.919 | 99.99th=[82314] 00:14:40.919 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:14:40.919 slat (usec): min=11, max=12579, avg=206.09, stdev=1113.53 00:14:40.919 clat (usec): min=15727, max=59436, avg=28006.71, stdev=9152.28 00:14:40.919 lat (usec): min=19517, max=59451, avg=28212.79, stdev=9122.56 00:14:40.919 clat percentiles (usec): 00:14:40.919 | 1.00th=[17433], 5.00th=[19530], 10.00th=[20055], 20.00th=[20579], 00:14:40.919 | 30.00th=[21627], 40.00th=[22676], 50.00th=[26084], 60.00th=[26608], 00:14:40.919 | 70.00th=[27919], 80.00th=[34866], 90.00th=[45351], 95.00th=[47449], 00:14:40.919 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:14:40.919 | 99.99th=[59507] 00:14:40.919 bw ( KiB/s): min= 7912, max= 8440, per=19.36%, 
avg=8176.00, stdev=373.35, samples=2 00:14:40.919 iops : min= 1978, max= 2110, avg=2044.00, stdev=93.34, samples=2 00:14:40.919 lat (usec) : 750=0.03% 00:14:40.919 lat (msec) : 20=5.94%, 50=84.98%, 100=9.06% 00:14:40.919 cpu : usr=2.29%, sys=5.97%, ctx=122, majf=0, minf=7 00:14:40.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:14:40.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.919 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.919 00:14:40.919 Run status group 0 (all jobs): 00:14:40.919 READ: bw=38.0MiB/s (39.9MB/s), 7129KiB/s-12.0MiB/s (7300kB/s-12.6MB/s), io=38.2MiB (40.1MB), run=1002-1006msec 00:14:40.919 WRITE: bw=41.2MiB/s (43.2MB/s), 8143KiB/s-13.0MiB/s (8339kB/s-13.6MB/s), io=41.5MiB (43.5MB), run=1002-1006msec 00:14:40.919 00:14:40.919 Disk stats (read/write): 00:14:40.919 nvme0n1: ios=2610/2818, merge=0/0, ticks=12288/12622, in_queue=24910, util=89.18% 00:14:40.919 nvme0n2: ios=2605/2951, merge=0/0, ticks=16205/16821, in_queue=33026, util=88.89% 00:14:40.919 nvme0n3: ios=1553/1943, merge=0/0, ticks=24651/25689, in_queue=50340, util=88.92% 00:14:40.919 nvme0n4: ios=1536/1696, merge=0/0, ticks=15100/11006, in_queue=26106, util=89.78% 00:14:40.919 10:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:40.919 [global] 00:14:40.919 thread=1 00:14:40.919 invalidate=1 00:14:40.919 rw=randwrite 00:14:40.919 time_based=1 00:14:40.919 runtime=1 00:14:40.919 ioengine=libaio 00:14:40.919 direct=1 00:14:40.919 bs=4096 00:14:40.919 iodepth=128 00:14:40.919 norandommap=0 00:14:40.920 numjobs=1 00:14:40.920 00:14:40.920 verify_dump=1 00:14:40.920 verify_backlog=512 00:14:40.920 verify_state_save=0 00:14:40.920 do_verify=1 00:14:40.920 verify=crc32c-intel 00:14:40.920 [job0] 00:14:40.920 filename=/dev/nvme0n1 00:14:40.920 [job1] 00:14:40.920 filename=/dev/nvme0n2 00:14:40.920 [job2] 00:14:40.920 filename=/dev/nvme0n3 00:14:40.920 [job3] 00:14:40.920 filename=/dev/nvme0n4 00:14:40.920 Could not set queue depth (nvme0n1) 00:14:40.920 Could not set queue depth (nvme0n2) 00:14:40.920 Could not set queue depth (nvme0n3) 00:14:40.920 Could not set queue depth (nvme0n4) 00:14:40.920 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.920 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.920 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.920 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.920 fio-3.35 00:14:40.920 Starting 4 threads 00:14:42.294 00:14:42.294 job0: (groupid=0, jobs=1): err= 0: pid=66540: Thu Dec 5 10:58:09 2024 00:14:42.294 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:14:42.294 slat (usec): min=9, max=9155, avg=155.37, stdev=822.30 00:14:42.294 clat (usec): min=11702, max=29631, avg=19500.75, stdev=2267.98 00:14:42.294 lat (usec): min=11825, max=32947, avg=19656.12, stdev=2358.32 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[13042], 5.00th=[16057], 10.00th=[17433], 20.00th=[17957], 00:14:42.295 | 30.00th=[18744], 
40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:14:42.295 | 70.00th=[20317], 80.00th=[20841], 90.00th=[22152], 95.00th=[23725], 00:14:42.295 | 99.00th=[26608], 99.50th=[26870], 99.90th=[28705], 99.95th=[29230], 00:14:42.295 | 99.99th=[29754] 00:14:42.295 write: IOPS=3460, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1006msec); 0 zone resets 00:14:42.295 slat (usec): min=22, max=9167, avg=139.08, stdev=722.09 00:14:42.295 clat (usec): min=298, max=30499, avg=19244.63, stdev=3060.55 00:14:42.295 lat (usec): min=5428, max=30559, avg=19383.71, stdev=3121.57 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[ 6718], 5.00th=[15008], 10.00th=[16057], 20.00th=[17433], 00:14:42.295 | 30.00th=[18220], 40.00th=[18744], 50.00th=[19530], 60.00th=[20055], 00:14:42.295 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22414], 95.00th=[23200], 00:14:42.295 | 99.00th=[27395], 99.50th=[28181], 99.90th=[29230], 99.95th=[30016], 00:14:42.295 | 99.99th=[30540] 00:14:42.295 bw ( KiB/s): min=13082, max=13768, per=25.41%, avg=13425.00, stdev=485.08, samples=2 00:14:42.295 iops : min= 3270, max= 3442, avg=3356.00, stdev=121.62, samples=2 00:14:42.295 lat (usec) : 500=0.02% 00:14:42.295 lat (msec) : 10=1.07%, 20=61.28%, 50=37.63% 00:14:42.295 cpu : usr=3.98%, sys=14.63%, ctx=271, majf=0, minf=8 00:14:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.295 issued rwts: total=3072,3481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.295 job1: (groupid=0, jobs=1): err= 0: pid=66541: Thu Dec 5 10:58:09 2024 00:14:42.295 read: IOPS=3113, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1007msec) 00:14:42.295 slat (usec): min=17, max=9165, avg=139.77, stdev=884.26 00:14:42.295 clat (usec): min=899, max=30464, avg=19429.96, stdev=2844.41 00:14:42.295 lat (usec): min=6254, max=36368, avg=19569.73, stdev=2881.55 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[ 7046], 5.00th=[13829], 10.00th=[17433], 20.00th=[18482], 00:14:42.295 | 30.00th=[18744], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:14:42.295 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21365], 95.00th=[21627], 00:14:42.295 | 99.00th=[29230], 99.50th=[29754], 99.90th=[30540], 99.95th=[30540], 00:14:42.295 | 99.99th=[30540] 00:14:42.295 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:14:42.295 slat (usec): min=10, max=16955, avg=146.85, stdev=903.97 00:14:42.295 clat (usec): min=7838, max=30167, avg=18575.35, stdev=2811.42 00:14:42.295 lat (usec): min=7892, max=30202, avg=18722.20, stdev=2701.32 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[12387], 5.00th=[15008], 10.00th=[15401], 20.00th=[16319], 00:14:42.295 | 30.00th=[17171], 40.00th=[17695], 50.00th=[18220], 60.00th=[19006], 00:14:42.295 | 70.00th=[19530], 80.00th=[20841], 90.00th=[22152], 95.00th=[23200], 00:14:42.295 | 99.00th=[29492], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:14:42.295 | 99.99th=[30278] 00:14:42.295 bw ( KiB/s): min=12817, max=15360, per=26.67%, avg=14088.50, stdev=1798.17, samples=2 00:14:42.295 iops : min= 3204, max= 3840, avg=3522.00, stdev=449.72, samples=2 00:14:42.295 lat (usec) : 1000=0.01% 00:14:42.295 lat (msec) : 10=1.03%, 20=65.71%, 50=33.25% 00:14:42.295 cpu : usr=4.08%, sys=13.32%, ctx=137, majf=0, minf=5 00:14:42.295 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:14:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.295 issued rwts: total=3135,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.295 job2: (groupid=0, jobs=1): err= 0: pid=66542: Thu Dec 5 10:58:09 2024 00:14:42.295 read: IOPS=2800, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1006msec) 00:14:42.295 slat (usec): min=5, max=5911, avg=163.15, stdev=796.74 00:14:42.295 clat (usec): min=1218, max=25895, avg=21365.16, stdev=2409.68 00:14:42.295 lat (usec): min=6831, max=25914, avg=21528.31, stdev=2268.82 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[ 7570], 5.00th=[18482], 10.00th=[19792], 20.00th=[20579], 00:14:42.295 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:14:42.295 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23987], 95.00th=[24249], 00:14:42.295 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:14:42.295 | 99.99th=[25822] 00:14:42.295 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:14:42.295 slat (usec): min=7, max=9584, avg=165.89, stdev=774.99 00:14:42.295 clat (usec): min=14743, max=32365, avg=21696.58, stdev=2709.23 00:14:42.295 lat (usec): min=14867, max=32428, avg=21862.47, stdev=2622.28 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[16319], 5.00th=[19530], 10.00th=[19792], 20.00th=[20055], 00:14:42.295 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:14:42.295 | 70.00th=[22414], 80.00th=[22938], 90.00th=[25297], 95.00th=[28967], 00:14:42.295 | 99.00th=[32113], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:14:42.295 | 99.99th=[32375] 00:14:42.295 bw ( KiB/s): min=12288, max=12288, per=23.26%, avg=12288.00, stdev= 0.00, samples=2 00:14:42.295 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:14:42.295 lat (msec) : 2=0.02%, 10=0.54%, 20=16.30%, 50=83.14% 00:14:42.295 cpu : usr=3.68%, sys=11.44%, ctx=222, majf=0, minf=5 00:14:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.295 issued rwts: total=2817,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.295 job3: (groupid=0, jobs=1): err= 0: pid=66543: Thu Dec 5 10:58:09 2024 00:14:42.295 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:14:42.295 slat (usec): min=5, max=6438, avg=157.28, stdev=638.16 00:14:42.295 clat (usec): min=15851, max=27245, avg=20748.94, stdev=1589.14 00:14:42.295 lat (usec): min=15902, max=27302, avg=20906.22, stdev=1667.25 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[16909], 5.00th=[18220], 10.00th=[19268], 20.00th=[19792], 00:14:42.295 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841], 00:14:42.295 | 70.00th=[21103], 80.00th=[21627], 90.00th=[23200], 95.00th=[23725], 00:14:42.295 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26608], 99.95th=[26870], 00:14:42.295 | 99.99th=[27132] 00:14:42.295 write: IOPS=3150, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1004msec); 0 zone resets 00:14:42.295 slat (usec): min=16, max=8971, avg=152.04, stdev=697.55 00:14:42.295 clat (usec): min=700, max=27818, avg=19881.87, 
stdev=2677.39 00:14:42.295 lat (usec): min=5857, max=27904, avg=20033.91, stdev=2746.57 00:14:42.295 clat percentiles (usec): 00:14:42.295 | 1.00th=[ 7046], 5.00th=[15926], 10.00th=[17433], 20.00th=[18220], 00:14:42.295 | 30.00th=[19006], 40.00th=[20055], 50.00th=[20579], 60.00th=[20579], 00:14:42.295 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22152], 95.00th=[23200], 00:14:42.295 | 99.00th=[26084], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:14:42.295 | 99.99th=[27919] 00:14:42.295 bw ( KiB/s): min=12200, max=12400, per=23.28%, avg=12300.00, stdev=141.42, samples=2 00:14:42.295 iops : min= 3050, max= 3100, avg=3075.00, stdev=35.36, samples=2 00:14:42.295 lat (usec) : 750=0.02% 00:14:42.295 lat (msec) : 10=0.67%, 20=31.21%, 50=68.10% 00:14:42.295 cpu : usr=4.29%, sys=12.46%, ctx=276, majf=0, minf=3 00:14:42.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:42.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:42.295 issued rwts: total=3072,3163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.295 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:42.295 00:14:42.295 Run status group 0 (all jobs): 00:14:42.295 READ: bw=46.9MiB/s (49.2MB/s), 10.9MiB/s-12.2MiB/s (11.5MB/s-12.8MB/s), io=47.2MiB (49.5MB), run=1004-1007msec 00:14:42.295 WRITE: bw=51.6MiB/s (54.1MB/s), 11.9MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=52.0MiB (54.5MB), run=1004-1007msec 00:14:42.295 00:14:42.295 Disk stats (read/write): 00:14:42.295 nvme0n1: ios=2610/3072, merge=0/0, ticks=22997/26276, in_queue=49273, util=89.37% 00:14:42.295 nvme0n2: ios=2677/3072, merge=0/0, ticks=48613/52755, in_queue=101368, util=89.49% 00:14:42.295 nvme0n3: ios=2580/2560, merge=0/0, ticks=12454/12365, in_queue=24819, util=90.68% 00:14:42.295 nvme0n4: ios=2581/2858, merge=0/0, ticks=17143/16003, in_queue=33146, util=90.11% 00:14:42.295 10:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:42.295 10:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66562 00:14:42.295 10:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:42.295 10:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:42.295 [global] 00:14:42.295 thread=1 00:14:42.295 invalidate=1 00:14:42.295 rw=read 00:14:42.295 time_based=1 00:14:42.295 runtime=10 00:14:42.295 ioengine=libaio 00:14:42.295 direct=1 00:14:42.295 bs=4096 00:14:42.295 iodepth=1 00:14:42.295 norandommap=1 00:14:42.295 numjobs=1 00:14:42.295 00:14:42.295 [job0] 00:14:42.295 filename=/dev/nvme0n1 00:14:42.295 [job1] 00:14:42.295 filename=/dev/nvme0n2 00:14:42.295 [job2] 00:14:42.295 filename=/dev/nvme0n3 00:14:42.295 [job3] 00:14:42.295 filename=/dev/nvme0n4 00:14:42.295 Could not set queue depth (nvme0n1) 00:14:42.295 Could not set queue depth (nvme0n2) 00:14:42.295 Could not set queue depth (nvme0n3) 00:14:42.295 Could not set queue depth (nvme0n4) 00:14:42.295 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.295 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.295 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.295 job3: (g=0): rw=read, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.295 fio-3.35 00:14:42.296 Starting 4 threads 00:14:45.581 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:45.581 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=28602368, buflen=4096 00:14:45.581 fio: pid=66610, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.581 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:45.581 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=33722368, buflen=4096 00:14:45.581 fio: pid=66609, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.581 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.581 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:45.839 fio: pid=66606, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.839 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47546368, buflen=4096 00:14:45.839 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.839 10:58:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:46.098 fio: pid=66607, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:46.098 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52690944, buflen=4096 00:14:46.098 00:14:46.098 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66606: Thu Dec 5 10:58:13 2024 00:14:46.098 read: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(45.3MiB/3269msec) 00:14:46.098 slat (usec): min=7, max=13962, avg=18.01, stdev=224.62 00:14:46.098 clat (usec): min=113, max=2220, avg=262.17, stdev=59.42 00:14:46.098 lat (usec): min=123, max=14133, avg=280.17, stdev=232.15 00:14:46.098 clat percentiles (usec): 00:14:46.098 | 1.00th=[ 147], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 223], 00:14:46.098 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 269], 60.00th=[ 277], 00:14:46.098 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:14:46.098 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 644], 99.95th=[ 848], 00:14:46.098 | 99.99th=[ 2147] 00:14:46.098 bw ( KiB/s): min=13090, max=14632, per=30.88%, avg=13932.33, stdev=493.63, samples=6 00:14:46.098 iops : min= 3272, max= 3658, avg=3483.00, stdev=123.58, samples=6 00:14:46.098 lat (usec) : 250=32.15%, 500=67.72%, 750=0.06%, 1000=0.03% 00:14:46.098 lat (msec) : 2=0.01%, 4=0.03% 00:14:46.098 cpu : usr=1.29%, sys=4.56%, ctx=11619, majf=0, minf=1 00:14:46.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.098 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.098 issued rwts: total=11609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.098 job1: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66607: Thu Dec 5 10:58:13 2024 00:14:46.098 read: IOPS=3655, BW=14.3MiB/s (15.0MB/s)(50.2MiB/3519msec) 00:14:46.098 slat (usec): min=7, max=11740, avg=16.99, stdev=201.48 00:14:46.098 clat (usec): min=109, max=3616, avg=255.36, stdev=80.73 00:14:46.098 lat (usec): min=116, max=11971, avg=272.36, stdev=217.21 00:14:46.098 clat percentiles (usec): 00:14:46.098 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 165], 20.00th=[ 192], 00:14:46.098 | 30.00th=[ 233], 40.00th=[ 253], 50.00th=[ 265], 60.00th=[ 277], 00:14:46.098 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 330], 00:14:46.098 | 99.00th=[ 367], 99.50th=[ 412], 99.90th=[ 1156], 99.95th=[ 1385], 00:14:46.098 | 99.99th=[ 2474] 00:14:46.098 bw ( KiB/s): min=13496, max=14152, per=30.71%, avg=13856.00, stdev=236.78, samples=6 00:14:46.098 iops : min= 3374, max= 3538, avg=3464.00, stdev=59.19, samples=6 00:14:46.098 lat (usec) : 250=38.01%, 500=61.64%, 750=0.14%, 1000=0.09% 00:14:46.098 lat (msec) : 2=0.08%, 4=0.03% 00:14:46.098 cpu : usr=1.14%, sys=4.41%, ctx=12873, majf=0, minf=2 00:14:46.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.098 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.098 issued rwts: total=12865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.098 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.098 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66609: Thu Dec 5 10:58:13 2024 00:14:46.098 read: IOPS=2677, BW=10.5MiB/s (11.0MB/s)(32.2MiB/3075msec) 00:14:46.098 slat (usec): min=7, max=18948, avg=27.33, stdev=235.17 00:14:46.098 clat (usec): min=150, max=1959, avg=343.22, stdev=65.57 00:14:46.098 lat (usec): min=158, max=19191, avg=370.55, stdev=243.56 00:14:46.098 clat percentiles (usec): 00:14:46.098 | 1.00th=[ 190], 5.00th=[ 208], 10.00th=[ 229], 20.00th=[ 318], 00:14:46.098 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 367], 00:14:46.098 | 70.00th=[ 375], 80.00th=[ 388], 90.00th=[ 404], 95.00th=[ 416], 00:14:46.098 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 570], 99.95th=[ 963], 00:14:46.098 | 99.99th=[ 1958] 00:14:46.098 bw ( KiB/s): min=10024, max=10696, per=22.82%, avg=10297.60, stdev=322.61, samples=5 00:14:46.098 iops : min= 2506, max= 2674, avg=2574.40, stdev=80.65, samples=5 00:14:46.098 lat (usec) : 250=13.68%, 500=86.17%, 750=0.09%, 1000=0.02% 00:14:46.098 lat (msec) : 2=0.04% 00:14:46.098 cpu : usr=1.40%, sys=5.86%, ctx=8238, majf=0, minf=2 00:14:46.098 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.098 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.099 issued rwts: total=8234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.099 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66610: Thu Dec 5 10:58:13 2024 00:14:46.099 read: IOPS=2447, BW=9790KiB/s (10.0MB/s)(27.3MiB/2853msec) 00:14:46.099 slat (nsec): min=8742, max=94381, avg=27454.22, stdev=7494.51 00:14:46.099 clat (usec): min=220, max=5355, avg=377.82, stdev=108.03 00:14:46.099 lat (usec): min=248, max=5383, avg=405.27, stdev=108.62 00:14:46.099 clat 
percentiles (usec): 00:14:46.099 | 1.00th=[ 281], 5.00th=[ 314], 10.00th=[ 326], 20.00th=[ 338], 00:14:46.099 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 379], 00:14:46.099 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 424], 95.00th=[ 482], 00:14:46.099 | 99.00th=[ 553], 99.50th=[ 570], 99.90th=[ 1565], 99.95th=[ 2212], 00:14:46.099 | 99.99th=[ 5342] 00:14:46.099 bw ( KiB/s): min= 9544, max=10096, per=21.94%, avg=9896.00, stdev=224.93, samples=5 00:14:46.099 iops : min= 2386, max= 2524, avg=2474.00, stdev=56.23, samples=5 00:14:46.099 lat (usec) : 250=0.17%, 500=95.73%, 750=3.92%, 1000=0.03% 00:14:46.099 lat (msec) : 2=0.07%, 4=0.01%, 10=0.04% 00:14:46.099 cpu : usr=1.30%, sys=6.38%, ctx=6985, majf=0, minf=2 00:14:46.099 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.099 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.099 issued rwts: total=6984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.099 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:46.099 00:14:46.099 Run status group 0 (all jobs): 00:14:46.099 READ: bw=44.1MiB/s (46.2MB/s), 9790KiB/s-14.3MiB/s (10.0MB/s-15.0MB/s), io=155MiB (163MB), run=2853-3519msec 00:14:46.099 00:14:46.099 Disk stats (read/write): 00:14:46.099 nvme0n1: ios=10841/0, merge=0/0, ticks=2931/0, in_queue=2931, util=94.95% 00:14:46.099 nvme0n2: ios=12001/0, merge=0/0, ticks=3166/0, in_queue=3166, util=95.27% 00:14:46.099 nvme0n3: ios=7507/0, merge=0/0, ticks=2667/0, in_queue=2667, util=96.64% 00:14:46.099 nvme0n4: ios=6452/0, merge=0/0, ticks=2437/0, in_queue=2437, util=96.47% 00:14:46.099 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.099 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:46.357 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.357 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:46.357 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.357 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:46.616 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.616 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:46.874 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.874 10:58:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66562 00:14:47.133 10:58:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:47.133 nvmf hotplug test: fio failed as expected 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:47.133 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:47.392 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:47.392 rmmod nvme_tcp 00:14:47.392 rmmod nvme_fabrics 00:14:47.392 rmmod nvme_keyring 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 66180 ']' 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 66180 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@954 -- # '[' -z 66180 ']' 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66180 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66180 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.650 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.650 killing process with pid 66180 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66180' 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66180 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66180 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:47.651 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:14:47.972 00:14:47.972 real 0m19.591s 00:14:47.972 user 1m13.350s 00:14:47.972 sys 0m9.366s 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.972 10:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.972 ************************************ 00:14:47.972 END TEST nvmf_fio_target 00:14:47.972 ************************************ 00:14:47.972 10:58:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:47.972 10:58:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.972 10:58:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.972 10:58:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:47.972 ************************************ 00:14:47.972 START TEST nvmf_bdevio 00:14:47.972 
************************************ 00:14:47.972 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:48.347 * Looking for test storage... 00:14:48.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.347 --rc genhtml_branch_coverage=1 00:14:48.347 --rc genhtml_function_coverage=1 00:14:48.347 --rc genhtml_legend=1 00:14:48.347 --rc geninfo_all_blocks=1 00:14:48.347 --rc geninfo_unexecuted_blocks=1 00:14:48.347 00:14:48.347 ' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.347 --rc genhtml_branch_coverage=1 00:14:48.347 --rc genhtml_function_coverage=1 00:14:48.347 --rc genhtml_legend=1 00:14:48.347 --rc geninfo_all_blocks=1 00:14:48.347 --rc geninfo_unexecuted_blocks=1 00:14:48.347 00:14:48.347 ' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.347 --rc genhtml_branch_coverage=1 00:14:48.347 --rc genhtml_function_coverage=1 00:14:48.347 --rc genhtml_legend=1 00:14:48.347 --rc geninfo_all_blocks=1 00:14:48.347 --rc geninfo_unexecuted_blocks=1 00:14:48.347 00:14:48.347 ' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:48.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.347 --rc genhtml_branch_coverage=1 00:14:48.347 --rc genhtml_function_coverage=1 00:14:48.347 --rc genhtml_legend=1 00:14:48.347 --rc geninfo_all_blocks=1 00:14:48.347 --rc geninfo_unexecuted_blocks=1 00:14:48.347 00:14:48.347 ' 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.347 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:58:15 
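
The lcov check that opened this test walked through the version comparator in scripts/common.sh, which splits both version strings on '.' and '-' and compares them component by component. A condensed sketch of that shape (treating missing components as 0 is an assumption here, not taken from the script):

  cmp_versions() {
      local IFS=.- op=$2 v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]   # every component matched
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 succeeds
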
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:48.348 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:48.348 10:58:15 
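
The "integer expression expected" complaint above is benign: build_nvmf_app_args numerically tests a variable that is empty in this run, `[ "" -eq 1 ]` is not a valid numeric comparison, so the test evaluates as false and the script proceeds. The usual defensive spellings, with `flag` as a placeholder name rather than the actual variable:

  flag=""                      # stands in for the empty variable
  [ "${flag:-0}" -eq 1 ]       # substitute a default before the numeric test
  (( ${flag:-0} == 1 ))        # or compare in an arithmetic context
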
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:14:48.348 10:58:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:48.348 10:58:15 
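
With no physical NICs in this VM (NET_TYPE=virt), the target side of every link lives in a dedicated network namespace and a bridge stitches the two sides together. Note that every firewall rule the setup adds carries an SPDK_NVMF comment tag, which is what let the teardown at the end of the previous test scrub them all in a single save/filter/restore pass. Condensed from the trace above:

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  # Tag the rule so it can be found (and removed) later by its comment:
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  # Teardown then drops every tagged rule in one shot:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
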
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:14:48.348 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 
10.0.0.1/24 dev initiator0 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:14:48.349 10.0.0.1 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:14:48.349 10.0.0.2 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:48.349 10:58:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:14:48.349 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 
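
That completes the first initiator/target pair. The recipe: create two veth pairs, push the target end into the namespace, derive dotted-quad addresses from an integer pool starting at 0x0a000001 (10.0.0.1), record each address in the device's ifalias, enslave both *_br peers to the bridge, and open the NVMe/TCP listener port. Condensed below, with the `ip link set ... up` calls omitted for brevity; the octet extraction in val_to_ip is a plausible reconstruction, not copied from setup.sh:

  val_to_ip() {   # 167772161 -> 10.0.0.1
      printf '%u.%u.%u.%u\n' \
          $(( $1 >> 24 )) $(( ($1 >> 16) & 255 )) $(( ($1 >> 8) & 255 )) $(( $1 & 255 ))
  }
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0    type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk          # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
  ip link set initiator0_br master nvmf_br        # both bridge-side peers join nvmf_br
  ip link set target0_br master nvmf_br
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The ifalias doubles as a small key-value store: later helpers recover a device's address with a plain `cat /sys/class/net/<dev>/ifalias` instead of parsing `ip addr` output.
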
00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@61 -- # add_to_ns target1 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:14:48.609 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:14:48.610 10.0.0.3 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:14:48.610 10.0.0.4 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:48.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:14:48.610 00:14:48.610 --- 10.0.0.1 ping statistics --- 00:14:48.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.610 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:48.610 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:48.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:48.871 00:14:48.871 --- 10.0.0.2 ping statistics --- 00:14:48.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.871 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:14:48.871 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:14:48.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:48.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:14:48.872 00:14:48.872 --- 10.0.0.3 ping statistics --- 00:14:48.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.872 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:14:48.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:48.872 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:14:48.872 00:14:48.872 --- 10.0.0.4 ping statistics --- 00:14:48.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.872 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 
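
All four addresses answer a single ping each, initiator addresses probed from inside the namespace and target addresses from the host, after which the legacy NVMF_* variables are read back out of the stored ifaliases. The same checks, condensed into a sketch:

  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator0
  ping -c 1 10.0.0.2                              # host      -> target0
  ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> initiator1
  ping -c 1 10.0.0.4                              # host      -> target1
  NVMF_FIRST_INITIATOR_IP=$(cat /sys/class/net/initiator0/ifalias)                       # 10.0.0.1
  NVMF_FIRST_TARGET_IP=$(ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias)  # 10.0.0.2
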
00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:48.872 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:48.873 
10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=66923 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 66923 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 66923 ']' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.873 10:58:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.873 10:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.132 [2024-12-05 10:58:16.043335] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:14:49.132 [2024-12-05 10:58:16.043405] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.132 [2024-12-05 10:58:16.199867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.132 [2024-12-05 10:58:16.256957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.132 [2024-12-05 10:58:16.257007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.132 [2024-12-05 10:58:16.257017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.132 [2024-12-05 10:58:16.257025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.132 [2024-12-05 10:58:16.257032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
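What nvmfappstart reduces to in this run: launch nvmf_tgt inside the target namespace with the traced flags, record its pid, and poll the RPC socket until the app answers. A condensed sketch (the retry bound and probe method are assumptions; waitforlisten in autotest_common.sh is the authoritative implementation):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # Probe the UNIX-domain RPC socket until the target is ready to serve RPCs.
  for ((i = 0; i < 100; i++)); do
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done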
00:14:49.132 [2024-12-05 10:58:16.258253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.132 [2024-12-05 10:58:16.258370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:49.132 [2024-12-05 10:58:16.258484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:49.132 [2024-12-05 10:58:16.258488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.392 [2024-12-05 10:58:16.302892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.960 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:49.960 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:49.960 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:49.960 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:49.960 10:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 [2024-12-05 10:58:17.025573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 Malloc0 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:49.960 [2024-12-05 10:58:17.094489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:49.960 { 00:14:49.960 "params": { 00:14:49.960 "name": "Nvme$subsystem", 00:14:49.960 "trtype": "$TEST_TRANSPORT", 00:14:49.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.960 "adrfam": "ipv4", 00:14:49.960 "trsvcid": "$NVMF_PORT", 00:14:49.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.960 "hdgst": ${hdgst:-false}, 00:14:49.960 "ddgst": ${ddgst:-false} 00:14:49.960 }, 00:14:49.960 "method": "bdev_nvme_attach_controller" 00:14:49.960 } 00:14:49.960 EOF 00:14:49.960 )") 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:14:49.960 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 00:14:50.220 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:14:50.220 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:50.220 "params": { 00:14:50.220 "name": "Nvme1", 00:14:50.220 "trtype": "tcp", 00:14:50.220 "traddr": "10.0.0.2", 00:14:50.220 "adrfam": "ipv4", 00:14:50.220 "trsvcid": "4420", 00:14:50.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:50.220 "hdgst": false, 00:14:50.220 "ddgst": false 00:14:50.220 }, 00:14:50.220 "method": "bdev_nvme_attach_controller" 00:14:50.220 }' 00:14:50.220 [2024-12-05 10:58:17.151050] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
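rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the target's socket, so the provisioning traced above reduces to five calls; gen_nvmf_target_json then emits the bdev_nvme_attach_controller config that bdevio consumes. Spelled out with the values from this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192     # '-t tcp -o' from NVMF_TRANSPORT_OPTS, '-u 8192' from the test
  $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then handed the generated config through process substitution, which is where the /dev/fd/62 path above comes from: bdevio --json <(gen_nvmf_target_json).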
00:14:50.220 [2024-12-05 10:58:17.151259] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66959 ] 00:14:50.220 [2024-12-05 10:58:17.305845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.220 [2024-12-05 10:58:17.363762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.220 [2024-12-05 10:58:17.363820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.220 [2024-12-05 10:58:17.363817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.479 [2024-12-05 10:58:17.420854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.479 I/O targets: 00:14:50.479 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:50.479 00:14:50.479 00:14:50.479 CUnit - A unit testing framework for C - Version 2.1-3 00:14:50.479 http://cunit.sourceforge.net/ 00:14:50.479 00:14:50.479 00:14:50.479 Suite: bdevio tests on: Nvme1n1 00:14:50.479 Test: blockdev write read block ...passed 00:14:50.479 Test: blockdev write zeroes read block ...passed 00:14:50.479 Test: blockdev write zeroes read no split ...passed 00:14:50.479 Test: blockdev write zeroes read split ...passed 00:14:50.479 Test: blockdev write zeroes read split partial ...passed 00:14:50.479 Test: blockdev reset ...[2024-12-05 10:58:17.566482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:50.479 [2024-12-05 10:58:17.566742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bf190 (9): Bad file descriptor 00:14:50.479 [2024-12-05 10:58:17.586144] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:50.479 passed 00:14:50.479 Test: blockdev write read 8 blocks ...passed 00:14:50.479 Test: blockdev write read size > 128k ...passed 00:14:50.479 Test: blockdev write read invalid size ...passed 00:14:50.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:50.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:50.479 Test: blockdev write read max offset ...passed 00:14:50.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:50.479 Test: blockdev writev readv 8 blocks ...passed 00:14:50.479 Test: blockdev writev readv 30 x 1block ...passed 00:14:50.479 Test: blockdev writev readv block ...passed 00:14:50.479 Test: blockdev writev readv size > 128k ...passed 00:14:50.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:50.479 Test: blockdev comparev and writev ...[2024-12-05 10:58:17.593706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.593862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.593896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.593907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.594771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:50.479 [2024-12-05 10:58:17.594780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:50.479 passed 00:14:50.479 Test: blockdev nvme passthru rw ...passed 00:14:50.479 Test: blockdev nvme passthru vendor specific ...[2024-12-05 10:58:17.595583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.479 [2024-12-05 10:58:17.595598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.595683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.479 [2024-12-05 10:58:17.595695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.595776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.479 [2024-12-05 10:58:17.595787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:50.479 [2024-12-05 10:58:17.595866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:50.479 [2024-12-05 10:58:17.595877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:50.479 passed 00:14:50.479 Test: blockdev nvme admin passthru ...passed 00:14:50.479 Test: blockdev copy ...passed 00:14:50.479 00:14:50.479 Run Summary: Type Total Ran Passed Failed Inactive 00:14:50.479 suites 1 1 n/a 0 0 00:14:50.479 tests 23 23 23 0 0 00:14:50.479 asserts 152 152 152 0 n/a 00:14:50.479 00:14:50.479 Elapsed time = 0.143 seconds 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:50.738 rmmod nvme_tcp 00:14:50.738 rmmod nvme_fabrics 00:14:50.738 rmmod nvme_keyring 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:14:50.738 
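nvmfcleanup tolerates a busy module: it drops errexit, walks the {1..20} retry loop seen above, and restores set -e once nvme-tcp, nvme-fabrics, and their dependents (nvme_keyring in this run) are unloaded. A sketch of that idiom (the break and pacing are assumptions; this run unloaded cleanly on the first pass):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1   # assumed pacing between attempts
  done
  set -e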
10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 66923 ']' 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 66923 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 66923 ']' 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 66923 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.738 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66923 00:14:50.997 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:50.997 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:50.997 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66923' 00:14:50.997 killing process with pid 66923 00:14:50.997 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 66923 00:14:50.997 10:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 66923 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:50.997 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:14:51.257 10:58:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:14:51.257 00:14:51.257 real 0m3.297s 00:14:51.257 user 0m9.054s 00:14:51.257 sys 0m1.136s 00:14:51.257 ************************************ 00:14:51.257 END TEST nvmf_bdevio 00:14:51.257 ************************************ 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:51.257 00:14:51.257 real 2m36.602s 00:14:51.257 user 6m37.516s 00:14:51.257 sys 1m2.042s 00:14:51.257 ************************************ 00:14:51.257 END TEST nvmf_target_core 00:14:51.257 ************************************ 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.257 10:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 
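nvmf_fini unwinds the topology in reverse order: the target namespace goes first (taking target0/target1 with it, hence the two `continue` branches above), then the bridge and the host-side veths, and finally every firewall rule the suite tagged with an SPDK_NVMF comment is filtered out of the saved ruleset. Condensed, with the namespace removal assumed (the trace hides _remove_target_ns behind xtrace_disable):

  ip netns delete nvmf_ns_spdk     # assumed body of _remove_target_ns; also deletes target0/target1
  ip link delete nvmf_br
  ip link delete initiator0
  ip link delete initiator1
  # Rules were installed with "-m comment --comment SPDK_NVMF:...", so dropping
  # the tagged lines and restoring the rest removes only what the tests added.
  iptables-save | grep -v SPDK_NVMF | iptables-restore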
00:14:51.517 10:58:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:51.517 10:58:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.517 10:58:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.517 10:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.517 ************************************ 00:14:51.517 START TEST nvmf_target_extra 00:14:51.517 ************************************ 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:51.517 * Looking for test storage... 00:14:51.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.517 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.778 --rc genhtml_branch_coverage=1 00:14:51.778 --rc genhtml_function_coverage=1 00:14:51.778 --rc genhtml_legend=1 00:14:51.778 --rc geninfo_all_blocks=1 00:14:51.778 --rc geninfo_unexecuted_blocks=1 00:14:51.778 00:14:51.778 ' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.778 --rc genhtml_branch_coverage=1 00:14:51.778 --rc genhtml_function_coverage=1 00:14:51.778 --rc genhtml_legend=1 00:14:51.778 --rc geninfo_all_blocks=1 00:14:51.778 --rc geninfo_unexecuted_blocks=1 00:14:51.778 00:14:51.778 ' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.778 --rc genhtml_branch_coverage=1 00:14:51.778 --rc genhtml_function_coverage=1 00:14:51.778 --rc genhtml_legend=1 00:14:51.778 --rc geninfo_all_blocks=1 00:14:51.778 --rc geninfo_unexecuted_blocks=1 00:14:51.778 00:14:51.778 ' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:51.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.778 --rc genhtml_branch_coverage=1 00:14:51.778 --rc genhtml_function_coverage=1 00:14:51.778 --rc genhtml_legend=1 00:14:51.778 --rc geninfo_all_blocks=1 00:14:51.778 --rc geninfo_unexecuted_blocks=1 00:14:51.778 00:14:51.778 ' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.778 10:58:18 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.778 10:58:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:51.779 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.779 ************************************ 00:14:51.779 START TEST nvmf_auth_target 00:14:51.779 ************************************ 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:51.779 * Looking for test storage... 
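The "[: : integer expression expected" line logged above is a real shell complaint rather than test output: common.sh line 31 evaluates `'[' '' -eq 1 ']'`, and test(1) refuses an empty string as an integer operand. A defensive rewrite sketch (the variable name is illustrative, not the one common.sh actually checks):

  # Failing pattern captured above:  [ '' -eq 1 ]
  # Treat unset/empty as 0 before the numeric comparison:
  [ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && echo flag-enabled
  # Or use arithmetic evaluation with an explicit default:
  (( ${SPDK_SOME_FLAG:-0} == 1 )) && echo flag-enabled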
00:14:51.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.779 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.040 --rc genhtml_branch_coverage=1 00:14:52.040 --rc genhtml_function_coverage=1 00:14:52.040 --rc genhtml_legend=1 00:14:52.040 --rc geninfo_all_blocks=1 00:14:52.040 --rc geninfo_unexecuted_blocks=1 00:14:52.040 00:14:52.040 ' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.040 --rc genhtml_branch_coverage=1 00:14:52.040 --rc genhtml_function_coverage=1 00:14:52.040 --rc genhtml_legend=1 00:14:52.040 --rc geninfo_all_blocks=1 00:14:52.040 --rc geninfo_unexecuted_blocks=1 00:14:52.040 00:14:52.040 ' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.040 --rc genhtml_branch_coverage=1 00:14:52.040 --rc genhtml_function_coverage=1 00:14:52.040 --rc genhtml_legend=1 00:14:52.040 --rc geninfo_all_blocks=1 00:14:52.040 --rc geninfo_unexecuted_blocks=1 00:14:52.040 00:14:52.040 ' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:52.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.040 --rc genhtml_branch_coverage=1 00:14:52.040 --rc genhtml_function_coverage=1 00:14:52.040 --rc genhtml_legend=1 00:14:52.040 --rc geninfo_all_blocks=1 00:14:52.040 --rc geninfo_unexecuted_blocks=1 00:14:52.040 00:14:52.040 ' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.040 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:52.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:52.041 10:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@223 -- # create_target_ns 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:52.041 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:14:52.041 10:58:19 
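Condensed from the trace above: create_target_ns and create_main_bridge boil down to a named network namespace plus a bridge that is allowed to forward to itself. Every command below is lifted directly from the log (the iptables comment tag is omitted for brevity):

    ip netns add nvmf_ns_spdk                        # namespace that will host the target
    ip netns exec nvmf_ns_spdk ip link set lo up     # loopback inside the namespace
    ip link add nvmf_br type bridge                  # host-side bridge for all veth legs
    ip link set nvmf_br up
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT   # permit intra-bridge traffic

setup_interfaces 2 veth then builds two initiator/target veth pairs on top of this scaffold, drawing addresses from the 0x0a000001 (10.0.0.1) pool.
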
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:52.041 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target0 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # 
set_up target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:14:52.042 10.0.0.1 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip 
addr add 10.0.0.2/24 dev target0' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:52.042 10.0.0.2 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip 
link set target0_br master nvmf_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:14:52.042 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:52.303 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772163 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:14:52.303 10:58:19 
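The set_ip helper turns the pooled integers into dotted quads: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the pool advances by two per pair. The trace only shows the final printf with the octets already split, so the extraction below is a reconstruction, not the literal setup.sh body:

    # Reconstruction of val_to_ip (assumption: octets derived by shifting/masking)
    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
        $(( val >> 24 & 255 )) $(( val >> 16 & 255 )) \
        $(( val >>  8 & 255 )) $(( val        & 255 ))
    }
    val_to_ip 167772163   # -> 10.0.0.3, matching the trace above
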
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:14:52.303 10.0.0.3 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772164 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:14:52.303 10.0.0.4 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:14:52.303 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
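Both pairs follow the same recipe; condensed for pair 1, with every name and address taken from the trace:

    ip link add initiator1 type veth peer name initiator1_br   # initiator end + bridge leg
    ip link add target1    type veth peer name target1_br      # target end + bridge leg
    ip link set target1 netns nvmf_ns_spdk                     # target end lives in the namespace
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip link set initiator1_br master nvmf_br                   # both bridge legs join nvmf_br
    ip link set target1_br master nvmf_br
    iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port

The ping_ips 2 pass that follows verifies all four addresses end to end: initiator addresses are pinged from inside the namespace, target addresses from the host.
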
nvmf/setup.sh@38 -- # ping_ips 2 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:52.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:14:52.304 00:14:52.304 --- 10.0.0.1 ping statistics --- 00:14:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.304 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:52.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:52.304 00:14:52.304 --- 10.0.0.2 ping statistics --- 00:14:52.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.304 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:52.304 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:14:52.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:52.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:14:52.564 00:14:52.564 --- 10.0.0.3 ping statistics --- 00:14:52.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.564 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:52.564 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:14:52.565 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:52.565 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:14:52.565 00:14:52.565 --- 10.0.0.4 ping statistics --- 00:14:52.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.565 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # return 0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:52.565 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.565 10:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:52.565 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=67250 
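With the fabric verified, nvmf_legacy_env maps dev_map back onto the variable names older tests expect, and modprobe nvme-tcp readies the host side. The resulting plan for this run, one /24 with two initiator/target pairs:

    #   initiator0  10.0.0.1  (host)  <->  target0  10.0.0.2  (nvmf_ns_spdk)
    #   initiator1  10.0.0.3  (host)  <->  target1  10.0.0.4  (nvmf_ns_spdk)
    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1
    NVMF_SECOND_INITIATOR_IP=10.0.0.3
    NVMF_FIRST_TARGET_IP=10.0.0.2
    NVMF_SECOND_TARGET_IP=10.0.0.4
    NVMF_TRANSPORT_OPTS='-t tcp -o'

nvmfappstart then launches nvmf_tgt inside the namespace (pid 67250 in this run) and waits for its RPC socket before the auth test proper begins.
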
00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 67250 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67250 ']' 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.566 10:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67282 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:53.502 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:53.503 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:14:53.503 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:53.503 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:53.503 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=8079de54a7076ecd9a2284c34eaa9e9ded67211e24fd8bf1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.vOg 00:14:53.760 10:58:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 8079de54a7076ecd9a2284c34eaa9e9ded67211e24fd8bf1 0 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 8079de54a7076ecd9a2284c34eaa9e9ded67211e24fd8bf1 0 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=8079de54a7076ecd9a2284c34eaa9e9ded67211e24fd8bf1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.vOg 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.vOg 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.vOg 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=36412e5c35c5935b276e6f51f92b0064c77f6deed70da5e5f574bceaed12a585 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.VSF 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 36412e5c35c5935b276e6f51f92b0064c77f6deed70da5e5f574bceaed12a585 3 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 36412e5c35c5935b276e6f51f92b0064c77f6deed70da5e5f574bceaed12a585 3 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=36412e5c35c5935b276e6f51f92b0064c77f6deed70da5e5f574bceaed12a585 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 
/tmp/spdk.key-sha512.VSF 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.VSF 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.VSF 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=8d48a0975bc9ab623068236e5c5080c4 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.kOS 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 8d48a0975bc9ab623068236e5c5080c4 1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 8d48a0975bc9ab623068236e5c5080c4 1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=8d48a0975bc9ab623068236e5c5080c4 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.kOS 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.kOS 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kOS 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:53.760 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 
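[Annotation] Each gen_dhchap_key <digest> <len> call traced above reads len/2 random bytes as a hex string with xxd and hands it to format_dhchap_key, whose inline "python -" emits the printable DHHC-1 secret. A minimal standalone sketch of that wrapping, assuming the usual NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes plus a little-endian CRC32 trailer, tagged with the digest id from the digests map shown in the trace: 0=null, 1=sha256, 2=sha384, 3=sha512) rather than SPDK's exact inline script:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, as for "gen_dhchap_key null 48"
digest=0                               # digests[null]='0' per the map in the trace
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC32 trailer per TP 8006
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF

The resulting string has the same shape as the DHHC-1:00:...: and DHHC-1:03:...: secrets that appear later in this log's nvme connect commands.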
00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a32a0c32cbc4da8b55b5b0380b7fbf967485bfd0e90df917 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.iZO 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a32a0c32cbc4da8b55b5b0380b7fbf967485bfd0e90df917 2 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a32a0c32cbc4da8b55b5b0380b7fbf967485bfd0e90df917 2 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a32a0c32cbc4da8b55b5b0380b7fbf967485bfd0e90df917 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:14:53.761 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.iZO 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.iZO 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.iZO 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=7109e0b2b328749175d9b635648c476c3e2e0771166dc254 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.mLM 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 7109e0b2b328749175d9b635648c476c3e2e0771166dc254 2 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 7109e0b2b328749175d9b635648c476c3e2e0771166dc254 2 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # 
key=7109e0b2b328749175d9b635648c476c3e2e0771166dc254 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:14:54.017 10:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.mLM 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.mLM 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.mLM 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=fee1ea094bc935513266ca7a9b0bec1e 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.rKO 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key fee1ea094bc935513266ca7a9b0bec1e 1 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 fee1ea094bc935513266ca7a9b0bec1e 1 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=fee1ea094bc935513266ca7a9b0bec1e 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.rKO 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.rKO 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.rKO 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@526 -- # local -A digests 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=1cb7399831be432673e76f941e744fb1aaff8672e0ea0b19a7acf2186a4b245b 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.tME 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 1cb7399831be432673e76f941e744fb1aaff8672e0ea0b19a7acf2186a4b245b 3 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 1cb7399831be432673e76f941e744fb1aaff8672e0ea0b19a7acf2186a4b245b 3 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=1cb7399831be432673e76f941e744fb1aaff8672e0ea0b19a7acf2186a4b245b 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:14:54.017 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.tME 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.tME 00:14:54.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.tME 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67250 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67250 ']' 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
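[Annotation] With all four secrets generated (keys[0..3], plus controller keys ckeys[0..2]; ckeys[3] is left empty), everything that follows alternates between two RPC endpoints: rpc_cmd talks to the nvmf target (pid 67250) on the default /var/tmp/spdk.sock, while the hostrpc wrapper seen at target/auth.sh@31 is just rpc.py pointed at the initiator-side spdk_tgt (pid 67282) on /var/tmp/host.sock. A sketch of the two call forms, with the paths used in this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target-side call (what rpc_cmd resolves to here)
"$rpc" keyring_file_add_key key0 /tmp/spdk.key-null.vOg

# host-side call (what hostrpc resolves to)
"$rpc" -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vOg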
00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67282 /var/tmp/host.sock 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67282 ']' 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:54.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.273 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.529 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vOg 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.vOg 00:14:54.530 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.vOg 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.VSF ]] 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VSF 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
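[Annotation] The registrations being traced here are one pass of the loop at target/auth.sh@108-113: every generated secret is installed under the same keyring name (key$i / ckey$i) on both daemons, and the ckey branch is skipped when no controller key exists (ckeys[3] is empty, so key3 authenticates in one direction only). Reconstructed from the trace, with the rpc_cmd/hostrpc wrappers expanded:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
  "$rpc" keyring_file_add_key "key$i" "${keys[i]}"                          # target side
  "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}"    # host side
  if [[ -n ${ckeys[i]} ]]; then
    "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done

Once the keys are registered, the rest of this section repeats one pattern per digest/dhgroup/key combination: reconfigure the host with bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups, allow the host on the subsystem with nvmf_subsystem_add_host --dhchap-key key$i (plus --dhchap-ctrlr-key ckey$i when present), attach with bdev_nvme_attach_controller, and verify the qpair's auth state/digest/dhgroup via nvmf_subsystem_get_qpairs and jq.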
00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VSF 00:14:54.786 10:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VSF 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kOS 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kOS 00:14:55.043 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kOS 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.iZO ]] 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iZO 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iZO 00:14:55.301 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iZO 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mLM 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.mLM 00:14:55.558 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.mLM 00:14:55.816 10:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.rKO ]] 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rKO 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rKO 00:14:55.816 10:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rKO 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tME 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tME 00:14:56.074 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tME 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.332 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.590 10:58:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.590 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.848 00:14:56.848 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.848 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.848 10:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.107 { 00:14:57.107 "cntlid": 1, 00:14:57.107 "qid": 0, 00:14:57.107 "state": "enabled", 00:14:57.107 "thread": "nvmf_tgt_poll_group_000", 00:14:57.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:14:57.107 "listen_address": { 00:14:57.107 "trtype": "TCP", 00:14:57.107 "adrfam": "IPv4", 00:14:57.107 "traddr": "10.0.0.2", 00:14:57.107 "trsvcid": "4420" 00:14:57.107 }, 00:14:57.107 "peer_address": { 00:14:57.107 "trtype": "TCP", 00:14:57.107 "adrfam": "IPv4", 00:14:57.107 "traddr": "10.0.0.1", 00:14:57.107 "trsvcid": "44052" 00:14:57.107 }, 00:14:57.107 "auth": { 00:14:57.107 "state": "completed", 00:14:57.107 "digest": "sha256", 00:14:57.107 "dhgroup": "null" 00:14:57.107 } 00:14:57.107 } 00:14:57.107 ]' 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:57.107 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.366 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.366 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.366 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.366 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:14:57.367 10:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.563 
10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.563 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.563 00:15:01.830 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.830 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.830 10:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.091 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.091 { 00:15:02.091 "cntlid": 3, 00:15:02.091 "qid": 0, 00:15:02.091 "state": "enabled", 00:15:02.091 "thread": "nvmf_tgt_poll_group_000", 00:15:02.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:02.091 "listen_address": { 00:15:02.091 "trtype": "TCP", 00:15:02.091 "adrfam": "IPv4", 00:15:02.091 "traddr": "10.0.0.2", 00:15:02.091 "trsvcid": "4420" 00:15:02.091 }, 00:15:02.091 "peer_address": { 00:15:02.091 "trtype": "TCP", 00:15:02.091 "adrfam": "IPv4", 00:15:02.092 "traddr": "10.0.0.1", 00:15:02.092 "trsvcid": "44072" 00:15:02.092 }, 00:15:02.092 "auth": { 00:15:02.092 "state": "completed", 00:15:02.092 "digest": "sha256", 00:15:02.092 "dhgroup": "null" 00:15:02.092 } 00:15:02.092 } 00:15:02.092 ]' 00:15:02.092 10:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.092 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.351 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:02.351 10:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:03.289 
10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.289 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.549 00:15:03.549 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.549 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.549 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.809 { 00:15:03.809 "cntlid": 5, 00:15:03.809 "qid": 0, 00:15:03.809 "state": "enabled", 00:15:03.809 "thread": "nvmf_tgt_poll_group_000", 00:15:03.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:03.809 "listen_address": { 00:15:03.809 "trtype": "TCP", 00:15:03.809 "adrfam": "IPv4", 00:15:03.809 "traddr": "10.0.0.2", 00:15:03.809 "trsvcid": "4420" 00:15:03.809 }, 00:15:03.809 "peer_address": { 00:15:03.809 "trtype": "TCP", 00:15:03.809 "adrfam": "IPv4", 00:15:03.809 "traddr": "10.0.0.1", 00:15:03.809 "trsvcid": "44104" 00:15:03.809 }, 00:15:03.809 "auth": { 00:15:03.809 "state": "completed", 00:15:03.809 "digest": "sha256", 00:15:03.809 "dhgroup": "null" 
00:15:03.809 } 00:15:03.809 } 00:15:03.809 ]' 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.809 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.069 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.069 10:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.069 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.069 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.069 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.380 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:04.380 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:04.957 10:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.217 10:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.217 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.476 00:15:05.476 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.476 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.476 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.735 { 00:15:05.735 "cntlid": 7, 00:15:05.735 "qid": 0, 00:15:05.735 "state": "enabled", 00:15:05.735 "thread": "nvmf_tgt_poll_group_000", 00:15:05.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:05.735 "listen_address": { 00:15:05.735 "trtype": "TCP", 00:15:05.735 "adrfam": "IPv4", 00:15:05.735 "traddr": "10.0.0.2", 00:15:05.735 "trsvcid": "4420" 00:15:05.735 }, 00:15:05.735 "peer_address": { 00:15:05.735 "trtype": "TCP", 00:15:05.735 "adrfam": "IPv4", 00:15:05.735 "traddr": "10.0.0.1", 00:15:05.735 "trsvcid": "49690" 00:15:05.735 }, 00:15:05.735 "auth": { 00:15:05.735 "state": "completed", 00:15:05.735 "digest": "sha256", 00:15:05.735 "dhgroup": "null" 
00:15:05.735 } 00:15:05.735 } 00:15:05.735 ]' 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.735 10:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.994 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:05.994 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.930 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.931 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.931 10:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.931 10:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.931 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.189 00:15:07.447 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.447 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.447 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.705 { 00:15:07.705 "cntlid": 9, 00:15:07.705 "qid": 0, 00:15:07.705 "state": "enabled", 00:15:07.705 "thread": "nvmf_tgt_poll_group_000", 00:15:07.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:07.705 "listen_address": { 00:15:07.705 "trtype": "TCP", 00:15:07.705 "adrfam": "IPv4", 00:15:07.705 "traddr": "10.0.0.2", 00:15:07.705 "trsvcid": "4420" 00:15:07.705 }, 00:15:07.705 "peer_address": { 00:15:07.705 "trtype": "TCP", 00:15:07.705 "adrfam": "IPv4", 00:15:07.705 "traddr": "10.0.0.1", 00:15:07.705 "trsvcid": "49704" 00:15:07.705 }, 00:15:07.705 "auth": 
{ 00:15:07.705 "state": "completed", 00:15:07.705 "digest": "sha256", 00:15:07.705 "dhgroup": "ffdhe2048" 00:15:07.705 } 00:15:07.705 } 00:15:07.705 ]' 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.705 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.706 10:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.966 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:07.966 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.532 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.790 10:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.358 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.358 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.358 { 00:15:09.358 "cntlid": 11, 00:15:09.358 "qid": 0, 00:15:09.358 "state": "enabled", 00:15:09.358 "thread": "nvmf_tgt_poll_group_000", 00:15:09.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:09.358 "listen_address": { 00:15:09.358 "trtype": "TCP", 00:15:09.358 "adrfam": "IPv4", 00:15:09.358 "traddr": "10.0.0.2", 00:15:09.358 "trsvcid": "4420" 00:15:09.358 }, 
00:15:09.358 "peer_address": { 00:15:09.358 "trtype": "TCP", 00:15:09.358 "adrfam": "IPv4", 00:15:09.358 "traddr": "10.0.0.1", 00:15:09.358 "trsvcid": "49722" 00:15:09.358 }, 00:15:09.358 "auth": { 00:15:09.359 "state": "completed", 00:15:09.359 "digest": "sha256", 00:15:09.359 "dhgroup": "ffdhe2048" 00:15:09.359 } 00:15:09.359 } 00:15:09.359 ]' 00:15:09.359 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.682 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.938 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:09.938 10:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.509 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe2048 2 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.768 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.769 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.769 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.027 00:15:11.027 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.027 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.027 10:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.286 { 00:15:11.286 "cntlid": 13, 00:15:11.286 "qid": 0, 00:15:11.286 "state": "enabled", 00:15:11.286 "thread": "nvmf_tgt_poll_group_000", 00:15:11.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:11.286 "listen_address": { 00:15:11.286 "trtype": "TCP", 
00:15:11.286 "adrfam": "IPv4", 00:15:11.286 "traddr": "10.0.0.2", 00:15:11.286 "trsvcid": "4420" 00:15:11.286 }, 00:15:11.286 "peer_address": { 00:15:11.286 "trtype": "TCP", 00:15:11.286 "adrfam": "IPv4", 00:15:11.286 "traddr": "10.0.0.1", 00:15:11.286 "trsvcid": "49762" 00:15:11.286 }, 00:15:11.286 "auth": { 00:15:11.286 "state": "completed", 00:15:11.286 "digest": "sha256", 00:15:11.286 "dhgroup": "ffdhe2048" 00:15:11.286 } 00:15:11.286 } 00:15:11.286 ]' 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.286 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.544 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:11.544 10:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:12.109 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.366 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.625 00:15:12.626 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.626 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.626 10:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.191 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.191 { 00:15:13.191 "cntlid": 15, 00:15:13.191 "qid": 0, 00:15:13.191 "state": "enabled", 00:15:13.191 "thread": "nvmf_tgt_poll_group_000", 00:15:13.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:13.191 "listen_address": { 00:15:13.191 
"trtype": "TCP", 00:15:13.191 "adrfam": "IPv4", 00:15:13.191 "traddr": "10.0.0.2", 00:15:13.191 "trsvcid": "4420" 00:15:13.191 }, 00:15:13.191 "peer_address": { 00:15:13.191 "trtype": "TCP", 00:15:13.191 "adrfam": "IPv4", 00:15:13.191 "traddr": "10.0.0.1", 00:15:13.191 "trsvcid": "49774" 00:15:13.191 }, 00:15:13.191 "auth": { 00:15:13.192 "state": "completed", 00:15:13.192 "digest": "sha256", 00:15:13.192 "dhgroup": "ffdhe2048" 00:15:13.192 } 00:15:13.192 } 00:15:13.192 ]' 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.192 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.466 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:13.466 10:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.031 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.288 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.289 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.289 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.546 00:15:14.546 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.546 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.546 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.803 { 00:15:14.803 "cntlid": 17, 00:15:14.803 "qid": 0, 00:15:14.803 "state": "enabled", 00:15:14.803 "thread": "nvmf_tgt_poll_group_000", 00:15:14.803 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:14.803 "listen_address": { 00:15:14.803 "trtype": "TCP", 00:15:14.803 "adrfam": "IPv4", 00:15:14.803 "traddr": "10.0.0.2", 00:15:14.803 "trsvcid": "4420" 00:15:14.803 }, 00:15:14.803 "peer_address": { 00:15:14.803 "trtype": "TCP", 00:15:14.803 "adrfam": "IPv4", 00:15:14.803 "traddr": "10.0.0.1", 00:15:14.803 "trsvcid": "34674" 00:15:14.803 }, 00:15:14.803 "auth": { 00:15:14.803 "state": "completed", 00:15:14.803 "digest": "sha256", 00:15:14.803 "dhgroup": "ffdhe3072" 00:15:14.803 } 00:15:14.803 } 00:15:14.803 ]' 00:15:14.803 10:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.079 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.337 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:15.337 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:15:15.957 10:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.228 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.486 00:15:16.486 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.486 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.486 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:15:16.744 { 00:15:16.744 "cntlid": 19, 00:15:16.744 "qid": 0, 00:15:16.744 "state": "enabled", 00:15:16.744 "thread": "nvmf_tgt_poll_group_000", 00:15:16.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:16.744 "listen_address": { 00:15:16.744 "trtype": "TCP", 00:15:16.744 "adrfam": "IPv4", 00:15:16.744 "traddr": "10.0.0.2", 00:15:16.744 "trsvcid": "4420" 00:15:16.744 }, 00:15:16.744 "peer_address": { 00:15:16.744 "trtype": "TCP", 00:15:16.744 "adrfam": "IPv4", 00:15:16.744 "traddr": "10.0.0.1", 00:15:16.744 "trsvcid": "34702" 00:15:16.744 }, 00:15:16.744 "auth": { 00:15:16.744 "state": "completed", 00:15:16.744 "digest": "sha256", 00:15:16.744 "dhgroup": "ffdhe3072" 00:15:16.744 } 00:15:16.744 } 00:15:16.744 ]' 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.744 10:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.003 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:17.003 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.939 10:58:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.939 10:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.939 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.939 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.939 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.939 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.197 00:15:18.197 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.197 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.197 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.763 10:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.763 { 00:15:18.763 "cntlid": 21, 00:15:18.763 "qid": 0, 00:15:18.763 "state": "enabled", 00:15:18.763 "thread": "nvmf_tgt_poll_group_000", 00:15:18.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:18.763 "listen_address": { 00:15:18.763 "trtype": "TCP", 00:15:18.763 "adrfam": "IPv4", 00:15:18.763 "traddr": "10.0.0.2", 00:15:18.763 "trsvcid": "4420" 00:15:18.763 }, 00:15:18.763 "peer_address": { 00:15:18.763 "trtype": "TCP", 00:15:18.763 "adrfam": "IPv4", 00:15:18.763 "traddr": "10.0.0.1", 00:15:18.763 "trsvcid": "34748" 00:15:18.763 }, 00:15:18.763 "auth": { 00:15:18.763 "state": "completed", 00:15:18.763 "digest": "sha256", 00:15:18.763 "dhgroup": "ffdhe3072" 00:15:18.763 } 00:15:18.763 } 00:15:18.763 ]' 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.763 10:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.021 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:19.021 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
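The trace repeats one round per key id and DH group. Condensed, the round just completed (digest sha256, dhgroup ffdhe3072, key id 2) reduces to the sequence below: a minimal sketch, assuming a target listener on 10.0.0.2:4420, the host application's RPC socket at /var/tmp/host.sock, and DHCHAP keys key0..key3 / ckey0..ckey3 already registered with the keyring earlier in auth.sh (not shown). Target-side calls are shown on rpc.py's default socket, which the script's rpc_cmd wrapper does not expose in the trace, and $key2_secret / $ckey2_secret stand in for the DHHC-1 secret strings printed above.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: restrict the initiator to one digest/dhgroup combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

    # Target side: authorize the host with key2 (bidirectional, so ckey2 too).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; DH-HMAC-CHAP runs during connect.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # nvme0

    # Target side: confirm the admin qpair finished authentication.
    $rpc nvmf_subsystem_get_qpairs "$subnqn" > qpairs.json
    jq -r '.[0].auth.state'   qpairs.json   # completed
    jq -r '.[0].auth.digest'  qpairs.json   # sha256
    jq -r '.[0].auth.dhgroup' qpairs.json   # ffdhe3072
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Same handshake through the kernel initiator (nvme-cli); the secret
    # values are the DHHC-1:xx:<base64>: strings shown in the trace.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret "$key2_secret" --dhchap-ctrl-secret "$ckey2_secret"
    nvme disconnect -n "$subnqn"

    # Tear down for the next key id.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each round thus exercises both the SPDK host stack (bdev_nvme) and the Linux kernel initiator (nvme-cli) against the same target configuration before moving to the next key.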
00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.587 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.846 10:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.105 00:15:20.105 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.105 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.105 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.364 
10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.364 { 00:15:20.364 "cntlid": 23, 00:15:20.364 "qid": 0, 00:15:20.364 "state": "enabled", 00:15:20.364 "thread": "nvmf_tgt_poll_group_000", 00:15:20.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:20.364 "listen_address": { 00:15:20.364 "trtype": "TCP", 00:15:20.364 "adrfam": "IPv4", 00:15:20.364 "traddr": "10.0.0.2", 00:15:20.364 "trsvcid": "4420" 00:15:20.364 }, 00:15:20.364 "peer_address": { 00:15:20.364 "trtype": "TCP", 00:15:20.364 "adrfam": "IPv4", 00:15:20.364 "traddr": "10.0.0.1", 00:15:20.364 "trsvcid": "34764" 00:15:20.364 }, 00:15:20.364 "auth": { 00:15:20.364 "state": "completed", 00:15:20.364 "digest": "sha256", 00:15:20.364 "dhgroup": "ffdhe3072" 00:15:20.364 } 00:15:20.364 } 00:15:20.364 ]' 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.364 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.622 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.622 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.622 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.622 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.622 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.881 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:20.881 10:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.448 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.707 10:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.965 00:15:21.965 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.965 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.965 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.231 { 00:15:22.231 "cntlid": 25, 00:15:22.231 "qid": 0, 00:15:22.231 "state": "enabled", 00:15:22.231 "thread": "nvmf_tgt_poll_group_000", 00:15:22.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:22.231 "listen_address": { 00:15:22.231 "trtype": "TCP", 00:15:22.231 "adrfam": "IPv4", 00:15:22.231 "traddr": "10.0.0.2", 00:15:22.231 "trsvcid": "4420" 00:15:22.231 }, 00:15:22.231 "peer_address": { 00:15:22.231 "trtype": "TCP", 00:15:22.231 "adrfam": "IPv4", 00:15:22.231 "traddr": "10.0.0.1", 00:15:22.231 "trsvcid": "34796" 00:15:22.231 }, 00:15:22.231 "auth": { 00:15:22.231 "state": "completed", 00:15:22.231 "digest": "sha256", 00:15:22.231 "dhgroup": "ffdhe4096" 00:15:22.231 } 00:15:22.231 } 00:15:22.231 ]' 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:22.231 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.542 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.542 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.542 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.542 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:22.542 10:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.110 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.368 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.933 00:15:23.933 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.933 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.933 10:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.191 { 00:15:24.191 "cntlid": 27, 00:15:24.191 "qid": 0, 00:15:24.191 "state": "enabled", 00:15:24.191 "thread": "nvmf_tgt_poll_group_000", 00:15:24.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:24.191 "listen_address": { 00:15:24.191 "trtype": "TCP", 00:15:24.191 "adrfam": "IPv4", 00:15:24.191 "traddr": "10.0.0.2", 00:15:24.191 "trsvcid": "4420" 00:15:24.191 }, 00:15:24.191 "peer_address": { 00:15:24.191 "trtype": "TCP", 00:15:24.191 "adrfam": "IPv4", 00:15:24.191 "traddr": "10.0.0.1", 00:15:24.191 "trsvcid": "34818" 00:15:24.191 }, 00:15:24.191 "auth": { 00:15:24.191 "state": "completed", 00:15:24.191 "digest": "sha256", 00:15:24.191 "dhgroup": "ffdhe4096" 00:15:24.191 } 00:15:24.191 } 00:15:24.191 ]' 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.191 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.450 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:24.450 10:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:25.018 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:25.276 
10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.276 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.534 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.793 00:15:25.793 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.793 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.793 10:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.051 { 00:15:26.051 "cntlid": 29, 00:15:26.051 "qid": 0, 00:15:26.051 "state": "enabled", 00:15:26.051 "thread": "nvmf_tgt_poll_group_000", 00:15:26.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:26.051 "listen_address": { 00:15:26.051 "trtype": "TCP", 00:15:26.051 "adrfam": "IPv4", 00:15:26.051 "traddr": "10.0.0.2", 00:15:26.051 "trsvcid": "4420" 00:15:26.051 }, 00:15:26.051 "peer_address": { 00:15:26.051 "trtype": "TCP", 00:15:26.051 "adrfam": "IPv4", 00:15:26.051 "traddr": "10.0.0.1", 00:15:26.051 "trsvcid": "33786" 00:15:26.051 }, 00:15:26.051 "auth": { 00:15:26.051 "state": "completed", 00:15:26.051 "digest": "sha256", 00:15:26.051 "dhgroup": "ffdhe4096" 00:15:26.051 } 00:15:26.051 } 00:15:26.051 ]' 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.051 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.052 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.310 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:26.311 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:26.879 10:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.879 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.138 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.704 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.704 10:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.704 { 00:15:27.704 "cntlid": 31, 00:15:27.704 "qid": 0, 00:15:27.704 "state": "enabled", 00:15:27.704 "thread": "nvmf_tgt_poll_group_000", 00:15:27.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:27.704 "listen_address": { 00:15:27.704 "trtype": "TCP", 00:15:27.704 "adrfam": "IPv4", 00:15:27.704 "traddr": "10.0.0.2", 00:15:27.704 "trsvcid": "4420" 00:15:27.704 }, 00:15:27.704 "peer_address": { 00:15:27.704 "trtype": "TCP", 00:15:27.704 "adrfam": "IPv4", 00:15:27.704 "traddr": "10.0.0.1", 00:15:27.704 "trsvcid": "33810" 00:15:27.704 }, 00:15:27.704 "auth": { 00:15:27.704 "state": "completed", 00:15:27.704 "digest": "sha256", 00:15:27.704 "dhgroup": "ffdhe4096" 00:15:27.704 } 00:15:27.704 } 00:15:27.704 ]' 00:15:27.704 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.962 10:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.220 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:28.220 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.786 10:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.103 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.716 00:15:29.716 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.716 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.716 10:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.974 { 00:15:29.974 "cntlid": 33, 00:15:29.974 "qid": 0, 00:15:29.974 "state": "enabled", 00:15:29.974 "thread": "nvmf_tgt_poll_group_000", 00:15:29.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:29.974 "listen_address": { 00:15:29.974 "trtype": "TCP", 00:15:29.974 "adrfam": "IPv4", 00:15:29.974 "traddr": "10.0.0.2", 00:15:29.974 "trsvcid": "4420" 00:15:29.974 }, 00:15:29.974 "peer_address": { 00:15:29.974 "trtype": "TCP", 00:15:29.974 "adrfam": "IPv4", 00:15:29.974 "traddr": "10.0.0.1", 00:15:29.974 "trsvcid": "33830" 00:15:29.974 }, 00:15:29.974 "auth": { 00:15:29.974 "state": "completed", 00:15:29.974 "digest": "sha256", 00:15:29.974 "dhgroup": "ffdhe6144" 00:15:29.974 } 00:15:29.974 } 00:15:29.974 ]' 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.974 10:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.974 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.974 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.974 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.974 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.975 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.233 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:30.233 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 
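The initiator half of each pass is plain nvme-cli, exactly as in the connect line directly above: an in-band DH-HMAC-CHAP connect with the host DHHC-1 secret, plus the controller secret whenever bidirectional authentication is being tested. A minimal sketch with the flags from this log follows; the gen-dhchap-key step is an assumption about how such DHHC-1 secrets are typically produced with nvme-cli and is not part of this run:

    # assumed key-generation step; this run supplies pre-generated DHHC-1 secrets
    HOSTKEY=$(nvme gen-dhchap-key --hmac=1 --nqn "$HOSTNQN")
    # in-band auth; --dhchap-ctrl-secret makes the authentication bidirectional
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$HOSTKEY" --dhchap-ctrl-secret "$CTRLKEY"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
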
00:15:30.798 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.798 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:30.798 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.798 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.055 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.055 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.055 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.055 10:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.055 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.621 00:15:31.621 10:58:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.621 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.621 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.879 { 00:15:31.879 "cntlid": 35, 00:15:31.879 "qid": 0, 00:15:31.879 "state": "enabled", 00:15:31.879 "thread": "nvmf_tgt_poll_group_000", 00:15:31.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:31.879 "listen_address": { 00:15:31.879 "trtype": "TCP", 00:15:31.879 "adrfam": "IPv4", 00:15:31.879 "traddr": "10.0.0.2", 00:15:31.879 "trsvcid": "4420" 00:15:31.879 }, 00:15:31.879 "peer_address": { 00:15:31.879 "trtype": "TCP", 00:15:31.879 "adrfam": "IPv4", 00:15:31.879 "traddr": "10.0.0.1", 00:15:31.879 "trsvcid": "33854" 00:15:31.879 }, 00:15:31.879 "auth": { 00:15:31.879 "state": "completed", 00:15:31.879 "digest": "sha256", 00:15:31.879 "dhgroup": "ffdhe6144" 00:15:31.879 } 00:15:31.879 } 00:15:31.879 ]' 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.879 10:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.136 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:32.136 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret 
DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.702 10:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.961 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.527 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.527 { 00:15:33.527 "cntlid": 37, 00:15:33.527 "qid": 0, 00:15:33.527 "state": "enabled", 00:15:33.527 "thread": "nvmf_tgt_poll_group_000", 00:15:33.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:33.527 "listen_address": { 00:15:33.527 "trtype": "TCP", 00:15:33.527 "adrfam": "IPv4", 00:15:33.527 "traddr": "10.0.0.2", 00:15:33.527 "trsvcid": "4420" 00:15:33.527 }, 00:15:33.527 "peer_address": { 00:15:33.527 "trtype": "TCP", 00:15:33.527 "adrfam": "IPv4", 00:15:33.527 "traddr": "10.0.0.1", 00:15:33.527 "trsvcid": "33884" 00:15:33.527 }, 00:15:33.527 "auth": { 00:15:33.527 "state": "completed", 00:15:33.527 "digest": "sha256", 00:15:33.527 "dhgroup": "ffdhe6144" 00:15:33.527 } 00:15:33.527 } 00:15:33.527 ]' 00:15:33.527 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.785 10:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.044 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:34.044 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.609 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.868 10:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.435 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.435 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.694 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.694 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.694 { 00:15:35.694 "cntlid": 39, 00:15:35.694 "qid": 0, 00:15:35.694 "state": "enabled", 00:15:35.694 "thread": "nvmf_tgt_poll_group_000", 00:15:35.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:35.695 "listen_address": { 00:15:35.695 "trtype": "TCP", 00:15:35.695 "adrfam": "IPv4", 00:15:35.695 "traddr": "10.0.0.2", 00:15:35.695 "trsvcid": "4420" 00:15:35.695 }, 00:15:35.695 "peer_address": { 00:15:35.695 "trtype": "TCP", 00:15:35.695 "adrfam": "IPv4", 00:15:35.695 "traddr": "10.0.0.1", 00:15:35.695 "trsvcid": "35844" 00:15:35.695 }, 00:15:35.695 "auth": { 00:15:35.695 "state": "completed", 00:15:35.695 "digest": "sha256", 00:15:35.695 "dhgroup": "ffdhe6144" 00:15:35.695 } 00:15:35.695 } 00:15:35.695 ]' 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.695 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.953 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:35.953 10:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.521 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.780 10:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.348 00:15:37.607 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.607 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.607 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.866 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.866 { 00:15:37.866 "cntlid": 41, 00:15:37.866 "qid": 0, 00:15:37.866 "state": "enabled", 00:15:37.866 "thread": "nvmf_tgt_poll_group_000", 00:15:37.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:37.866 "listen_address": { 00:15:37.866 "trtype": "TCP", 00:15:37.866 "adrfam": "IPv4", 00:15:37.866 "traddr": "10.0.0.2", 00:15:37.866 "trsvcid": "4420" 00:15:37.866 }, 00:15:37.866 "peer_address": { 00:15:37.866 "trtype": "TCP", 00:15:37.866 "adrfam": "IPv4", 00:15:37.866 "traddr": "10.0.0.1", 00:15:37.867 "trsvcid": "35866" 00:15:37.867 }, 00:15:37.867 "auth": { 00:15:37.867 "state": "completed", 00:15:37.867 "digest": "sha256", 00:15:37.867 "dhgroup": "ffdhe8192" 00:15:37.867 } 00:15:37.867 } 00:15:37.867 ]' 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.867 10:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.125 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret 
DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:38.125 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.690 10:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.255 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.821 00:15:39.821 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.821 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.821 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.079 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.079 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.079 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.079 10:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.079 { 00:15:40.079 "cntlid": 43, 00:15:40.079 "qid": 0, 00:15:40.079 "state": "enabled", 00:15:40.079 "thread": "nvmf_tgt_poll_group_000", 00:15:40.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:40.079 "listen_address": { 00:15:40.079 "trtype": "TCP", 00:15:40.079 "adrfam": "IPv4", 00:15:40.079 "traddr": "10.0.0.2", 00:15:40.079 "trsvcid": "4420" 00:15:40.079 }, 00:15:40.079 "peer_address": { 00:15:40.079 "trtype": "TCP", 00:15:40.079 "adrfam": "IPv4", 00:15:40.079 "traddr": "10.0.0.1", 00:15:40.079 "trsvcid": "35880" 00:15:40.079 }, 00:15:40.079 "auth": { 00:15:40.079 "state": "completed", 00:15:40.079 "digest": "sha256", 00:15:40.079 "dhgroup": "ffdhe8192" 00:15:40.079 } 00:15:40.079 } 00:15:40.079 ]' 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.079 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.416 10:59:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:40.416 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:40.983 10:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.241 10:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.241 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.807 00:15:41.807 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.807 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.807 10:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.067 { 00:15:42.067 "cntlid": 45, 00:15:42.067 "qid": 0, 00:15:42.067 "state": "enabled", 00:15:42.067 "thread": "nvmf_tgt_poll_group_000", 00:15:42.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:42.067 "listen_address": { 00:15:42.067 "trtype": "TCP", 00:15:42.067 "adrfam": "IPv4", 00:15:42.067 "traddr": "10.0.0.2", 00:15:42.067 "trsvcid": "4420" 00:15:42.067 }, 00:15:42.067 "peer_address": { 00:15:42.067 "trtype": "TCP", 00:15:42.067 "adrfam": "IPv4", 00:15:42.067 "traddr": "10.0.0.1", 00:15:42.067 "trsvcid": "35910" 00:15:42.067 }, 00:15:42.067 "auth": { 00:15:42.067 "state": "completed", 00:15:42.067 "digest": "sha256", 00:15:42.067 "dhgroup": "ffdhe8192" 00:15:42.067 } 00:15:42.067 } 00:15:42.067 ]' 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.067 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.326 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.326 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.326 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.584 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:42.584 10:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.151 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.410 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.977 00:15:43.977 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.977 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.977 10:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.237 { 00:15:44.237 "cntlid": 47, 00:15:44.237 "qid": 0, 00:15:44.237 "state": "enabled", 00:15:44.237 "thread": "nvmf_tgt_poll_group_000", 00:15:44.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:44.237 "listen_address": { 00:15:44.237 "trtype": "TCP", 00:15:44.237 "adrfam": "IPv4", 00:15:44.237 "traddr": "10.0.0.2", 00:15:44.237 "trsvcid": "4420" 00:15:44.237 }, 00:15:44.237 "peer_address": { 00:15:44.237 "trtype": "TCP", 00:15:44.237 "adrfam": "IPv4", 00:15:44.237 "traddr": "10.0.0.1", 00:15:44.237 "trsvcid": "35932" 00:15:44.237 }, 00:15:44.237 "auth": { 00:15:44.237 "state": "completed", 00:15:44.237 "digest": "sha256", 00:15:44.237 "dhgroup": "ffdhe8192" 00:15:44.237 } 00:15:44.237 } 00:15:44.237 ]' 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:44.237 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.496 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.496 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.496 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.496 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:44.496 10:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.434 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.693 00:15:45.693 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.693 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.693 10:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.951 { 00:15:45.951 "cntlid": 49, 00:15:45.951 "qid": 0, 00:15:45.951 "state": "enabled", 00:15:45.951 "thread": "nvmf_tgt_poll_group_000", 00:15:45.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:45.951 "listen_address": { 00:15:45.951 "trtype": "TCP", 00:15:45.951 "adrfam": "IPv4", 00:15:45.951 "traddr": "10.0.0.2", 00:15:45.951 "trsvcid": "4420" 00:15:45.951 }, 00:15:45.951 "peer_address": { 00:15:45.951 "trtype": "TCP", 00:15:45.951 "adrfam": "IPv4", 00:15:45.951 "traddr": "10.0.0.1", 00:15:45.951 "trsvcid": "60150" 00:15:45.951 }, 00:15:45.951 "auth": { 00:15:45.951 "state": "completed", 00:15:45.951 "digest": "sha384", 00:15:45.951 "dhgroup": "null" 00:15:45.951 } 00:15:45.951 } 00:15:45.951 ]' 00:15:45.951 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.210 10:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.210 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.469 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:46.469 10:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.037 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.296 10:59:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.296 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.297 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.555 00:15:47.555 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.555 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.555 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.814 { 00:15:47.814 "cntlid": 51, 00:15:47.814 "qid": 0, 00:15:47.814 "state": "enabled", 00:15:47.814 "thread": "nvmf_tgt_poll_group_000", 00:15:47.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:47.814 "listen_address": { 00:15:47.814 "trtype": "TCP", 00:15:47.814 "adrfam": "IPv4", 00:15:47.814 "traddr": "10.0.0.2", 00:15:47.814 "trsvcid": "4420" 00:15:47.814 }, 00:15:47.814 "peer_address": { 00:15:47.814 "trtype": "TCP", 00:15:47.814 "adrfam": "IPv4", 00:15:47.814 "traddr": "10.0.0.1", 00:15:47.814 "trsvcid": "60158" 00:15:47.814 }, 00:15:47.814 "auth": { 00:15:47.814 "state": "completed", 00:15:47.814 "digest": "sha384", 00:15:47.814 "dhgroup": "null" 00:15:47.814 } 00:15:47.814 } 00:15:47.814 ]' 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.814 10:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.073 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:48.073 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:48.640 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.913 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:48.913 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.913 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.914 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.914 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.914 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.914 10:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.914 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.172 00:15:49.429 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.429 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.429 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.691 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.692 { 00:15:49.692 "cntlid": 53, 00:15:49.692 "qid": 0, 00:15:49.692 "state": "enabled", 00:15:49.692 "thread": "nvmf_tgt_poll_group_000", 00:15:49.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:49.692 "listen_address": { 00:15:49.692 "trtype": "TCP", 00:15:49.692 "adrfam": "IPv4", 00:15:49.692 "traddr": "10.0.0.2", 00:15:49.692 "trsvcid": "4420" 00:15:49.692 }, 00:15:49.692 "peer_address": { 00:15:49.692 "trtype": "TCP", 00:15:49.692 "adrfam": "IPv4", 00:15:49.692 "traddr": "10.0.0.1", 00:15:49.692 "trsvcid": "60182" 00:15:49.692 }, 00:15:49.692 "auth": { 00:15:49.692 "state": "completed", 00:15:49.692 "digest": "sha384", 00:15:49.692 "dhgroup": "null" 00:15:49.692 } 00:15:49.692 } 00:15:49.692 ]' 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.692 10:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.692 10:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.956 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:49.956 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:50.522 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.522 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:50.522 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.781 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.781 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.781 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.781 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:50.781 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 
00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.040 10:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.297 00:15:51.297 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.297 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.297 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.555 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.555 { 00:15:51.555 "cntlid": 55, 00:15:51.555 "qid": 0, 00:15:51.555 "state": "enabled", 00:15:51.556 "thread": "nvmf_tgt_poll_group_000", 00:15:51.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:51.556 "listen_address": { 00:15:51.556 "trtype": "TCP", 00:15:51.556 "adrfam": "IPv4", 00:15:51.556 "traddr": "10.0.0.2", 00:15:51.556 "trsvcid": "4420" 00:15:51.556 }, 00:15:51.556 "peer_address": { 00:15:51.556 "trtype": "TCP", 00:15:51.556 "adrfam": "IPv4", 00:15:51.556 "traddr": "10.0.0.1", 00:15:51.556 "trsvcid": "60206" 00:15:51.556 }, 00:15:51.556 "auth": { 00:15:51.556 "state": "completed", 00:15:51.556 "digest": "sha384", 00:15:51.556 "dhgroup": "null" 00:15:51.556 } 00:15:51.556 } 00:15:51.556 ]' 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:51.556 10:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.556 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.813 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:51.813 10:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.380 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.945 10:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.203 00:15:53.203 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.203 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.203 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.463 { 00:15:53.463 "cntlid": 57, 00:15:53.463 "qid": 0, 00:15:53.463 "state": "enabled", 00:15:53.463 "thread": "nvmf_tgt_poll_group_000", 00:15:53.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:53.463 "listen_address": { 00:15:53.463 "trtype": "TCP", 00:15:53.463 "adrfam": "IPv4", 00:15:53.463 "traddr": "10.0.0.2", 00:15:53.463 "trsvcid": "4420" 00:15:53.463 }, 00:15:53.463 "peer_address": { 00:15:53.463 "trtype": "TCP", 00:15:53.463 "adrfam": "IPv4", 00:15:53.463 "traddr": "10.0.0.1", 00:15:53.463 "trsvcid": "60244" 00:15:53.463 }, 00:15:53.463 "auth": { 00:15:53.463 "state": "completed", 00:15:53.463 "digest": "sha384", 00:15:53.463 "dhgroup": "ffdhe2048" 00:15:53.463 } 00:15:53.463 } 00:15:53.463 ]' 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.463 10:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.463 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.735 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.735 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.735 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.027 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:54.027 10:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.615 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.874 10:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.132 00:15:55.132 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.132 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.132 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.390 { 00:15:55.390 "cntlid": 59, 00:15:55.390 "qid": 0, 00:15:55.390 "state": "enabled", 00:15:55.390 "thread": "nvmf_tgt_poll_group_000", 00:15:55.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:55.390 "listen_address": { 00:15:55.390 "trtype": "TCP", 00:15:55.390 "adrfam": "IPv4", 00:15:55.390 "traddr": "10.0.0.2", 00:15:55.390 "trsvcid": "4420" 00:15:55.390 }, 00:15:55.390 "peer_address": { 00:15:55.390 "trtype": "TCP", 00:15:55.390 "adrfam": "IPv4", 00:15:55.390 "traddr": "10.0.0.1", 00:15:55.390 "trsvcid": "50236" 00:15:55.390 }, 00:15:55.390 "auth": { 00:15:55.390 "state": "completed", 00:15:55.390 "digest": "sha384", 00:15:55.390 "dhgroup": "ffdhe2048" 00:15:55.390 } 00:15:55.390 } 00:15:55.390 ]' 00:15:55.390 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.649 10:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.649 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.909 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:55.909 10:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.490 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.748 10:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.008 00:15:57.268 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.268 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.268 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.528 { 00:15:57.528 "cntlid": 61, 00:15:57.528 "qid": 0, 00:15:57.528 "state": "enabled", 00:15:57.528 "thread": "nvmf_tgt_poll_group_000", 00:15:57.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:57.528 "listen_address": { 00:15:57.528 "trtype": "TCP", 00:15:57.528 "adrfam": "IPv4", 00:15:57.528 "traddr": "10.0.0.2", 00:15:57.528 "trsvcid": "4420" 00:15:57.528 }, 00:15:57.528 "peer_address": { 00:15:57.528 "trtype": "TCP", 00:15:57.528 "adrfam": "IPv4", 00:15:57.528 "traddr": "10.0.0.1", 00:15:57.528 "trsvcid": "50274" 00:15:57.528 }, 00:15:57.528 "auth": { 00:15:57.528 "state": "completed", 00:15:57.528 "digest": "sha384", 00:15:57.528 "dhgroup": "ffdhe2048" 00:15:57.528 } 00:15:57.528 } 00:15:57.528 ]' 00:15:57.528 10:59:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.528 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.789 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:57.789 10:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.394 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe2048 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.652 10:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.911 00:15:58.911 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.911 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.911 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.170 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.170 { 00:15:59.170 "cntlid": 63, 00:15:59.170 "qid": 0, 00:15:59.170 "state": "enabled", 00:15:59.170 "thread": "nvmf_tgt_poll_group_000", 00:15:59.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:15:59.170 "listen_address": { 00:15:59.170 "trtype": "TCP", 00:15:59.170 "adrfam": "IPv4", 00:15:59.170 "traddr": "10.0.0.2", 00:15:59.170 "trsvcid": "4420" 00:15:59.170 }, 00:15:59.170 "peer_address": { 00:15:59.170 "trtype": "TCP", 00:15:59.170 "adrfam": "IPv4", 00:15:59.170 "traddr": "10.0.0.1", 00:15:59.170 "trsvcid": "50292" 00:15:59.170 }, 00:15:59.170 "auth": { 00:15:59.170 "state": "completed", 00:15:59.170 "digest": "sha384", 00:15:59.170 "dhgroup": "ffdhe2048" 00:15:59.170 } 00:15:59.170 } 00:15:59.170 ]' 00:15:59.170 
10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.429 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.689 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:15:59.689 10:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.257 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.517 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.777 00:16:01.042 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.042 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.042 10:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.313 { 00:16:01.313 "cntlid": 65, 00:16:01.313 "qid": 0, 00:16:01.313 "state": "enabled", 00:16:01.313 "thread": "nvmf_tgt_poll_group_000", 00:16:01.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:01.313 "listen_address": { 00:16:01.313 "trtype": "TCP", 00:16:01.313 "adrfam": "IPv4", 00:16:01.313 "traddr": "10.0.0.2", 00:16:01.313 "trsvcid": "4420" 00:16:01.313 }, 00:16:01.313 "peer_address": { 00:16:01.313 "trtype": "TCP", 00:16:01.313 "adrfam": "IPv4", 00:16:01.313 "traddr": "10.0.0.1", 00:16:01.313 "trsvcid": "50320" 00:16:01.313 }, 00:16:01.313 "auth": { 00:16:01.313 "state": "completed", 
00:16:01.313 "digest": "sha384", 00:16:01.313 "dhgroup": "ffdhe3072" 00:16:01.313 } 00:16:01.313 } 00:16:01.313 ]' 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.313 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.572 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:01.572 10:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.511 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.769 00:16:02.769 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.769 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.769 10:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.028 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.028 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.028 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.028 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.286 { 00:16:03.286 "cntlid": 67, 00:16:03.286 "qid": 0, 00:16:03.286 "state": "enabled", 00:16:03.286 "thread": "nvmf_tgt_poll_group_000", 00:16:03.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:03.286 "listen_address": { 00:16:03.286 "trtype": "TCP", 00:16:03.286 "adrfam": "IPv4", 00:16:03.286 "traddr": "10.0.0.2", 00:16:03.286 "trsvcid": "4420" 00:16:03.286 }, 00:16:03.286 "peer_address": { 00:16:03.286 
"trtype": "TCP", 00:16:03.286 "adrfam": "IPv4", 00:16:03.286 "traddr": "10.0.0.1", 00:16:03.286 "trsvcid": "50346" 00:16:03.286 }, 00:16:03.286 "auth": { 00:16:03.286 "state": "completed", 00:16:03.286 "digest": "sha384", 00:16:03.286 "dhgroup": "ffdhe3072" 00:16:03.286 } 00:16:03.286 } 00:16:03.286 ]' 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.286 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.545 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:03.546 10:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.114 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 
00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.374 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.633 00:16:04.892 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.892 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.892 10:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.892 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.892 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.892 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.892 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.152 { 00:16:05.152 "cntlid": 69, 00:16:05.152 "qid": 0, 00:16:05.152 "state": "enabled", 00:16:05.152 "thread": "nvmf_tgt_poll_group_000", 00:16:05.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:05.152 "listen_address": { 00:16:05.152 "trtype": "TCP", 00:16:05.152 "adrfam": "IPv4", 00:16:05.152 
"traddr": "10.0.0.2", 00:16:05.152 "trsvcid": "4420" 00:16:05.152 }, 00:16:05.152 "peer_address": { 00:16:05.152 "trtype": "TCP", 00:16:05.152 "adrfam": "IPv4", 00:16:05.152 "traddr": "10.0.0.1", 00:16:05.152 "trsvcid": "60284" 00:16:05.152 }, 00:16:05.152 "auth": { 00:16:05.152 "state": "completed", 00:16:05.152 "digest": "sha384", 00:16:05.152 "dhgroup": "ffdhe3072" 00:16:05.152 } 00:16:05.152 } 00:16:05.152 ]' 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.152 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.411 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:05.411 10:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.007 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.266 10:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.266 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.831 00:16:06.831 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.831 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.831 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.089 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.089 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.089 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.089 10:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.089 { 00:16:07.089 "cntlid": 71, 00:16:07.089 "qid": 0, 00:16:07.089 "state": "enabled", 00:16:07.089 "thread": "nvmf_tgt_poll_group_000", 00:16:07.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:07.089 "listen_address": { 00:16:07.089 "trtype": "TCP", 00:16:07.089 "adrfam": "IPv4", 
00:16:07.089 "traddr": "10.0.0.2", 00:16:07.089 "trsvcid": "4420" 00:16:07.089 }, 00:16:07.089 "peer_address": { 00:16:07.089 "trtype": "TCP", 00:16:07.089 "adrfam": "IPv4", 00:16:07.089 "traddr": "10.0.0.1", 00:16:07.089 "trsvcid": "60312" 00:16:07.089 }, 00:16:07.089 "auth": { 00:16:07.089 "state": "completed", 00:16:07.089 "digest": "sha384", 00:16:07.089 "dhgroup": "ffdhe3072" 00:16:07.089 } 00:16:07.089 } 00:16:07.089 ]' 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.089 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.349 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:07.349 10:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:07.916 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.175 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:08.175 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.175 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.175 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.176 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.176 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.176 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.176 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.435 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.694 00:16:08.694 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.694 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.694 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.953 10:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.953 { 00:16:08.953 "cntlid": 73, 00:16:08.953 "qid": 0, 00:16:08.953 "state": "enabled", 00:16:08.953 "thread": "nvmf_tgt_poll_group_000", 00:16:08.953 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:08.953 "listen_address": { 00:16:08.953 "trtype": "TCP", 00:16:08.953 "adrfam": "IPv4", 00:16:08.953 "traddr": "10.0.0.2", 00:16:08.953 "trsvcid": "4420" 00:16:08.953 }, 00:16:08.953 "peer_address": { 00:16:08.953 "trtype": "TCP", 00:16:08.953 "adrfam": "IPv4", 00:16:08.953 "traddr": "10.0.0.1", 00:16:08.953 "trsvcid": "60332" 00:16:08.953 }, 00:16:08.953 "auth": { 00:16:08.953 "state": "completed", 00:16:08.953 "digest": "sha384", 00:16:08.953 "dhgroup": "ffdhe4096" 00:16:08.953 } 00:16:08.953 } 00:16:08.953 ]' 00:16:08.953 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.953 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.953 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.953 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.953 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.211 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.211 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.211 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.470 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:09.470 10:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:16:10.036 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.294 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.553 00:16:10.553 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.553 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.553 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:16:10.812 { 00:16:10.812 "cntlid": 75, 00:16:10.812 "qid": 0, 00:16:10.812 "state": "enabled", 00:16:10.812 "thread": "nvmf_tgt_poll_group_000", 00:16:10.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:10.812 "listen_address": { 00:16:10.812 "trtype": "TCP", 00:16:10.812 "adrfam": "IPv4", 00:16:10.812 "traddr": "10.0.0.2", 00:16:10.812 "trsvcid": "4420" 00:16:10.812 }, 00:16:10.812 "peer_address": { 00:16:10.812 "trtype": "TCP", 00:16:10.812 "adrfam": "IPv4", 00:16:10.812 "traddr": "10.0.0.1", 00:16:10.812 "trsvcid": "60362" 00:16:10.812 }, 00:16:10.812 "auth": { 00:16:10.812 "state": "completed", 00:16:10.812 "digest": "sha384", 00:16:10.812 "dhgroup": "ffdhe4096" 00:16:10.812 } 00:16:10.812 } 00:16:10.812 ]' 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.812 10:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.071 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.071 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.071 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.071 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.071 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.330 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:11.330 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.898 10:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:11.898 10:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.155 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.156 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.156 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.414 00:16:12.414 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.414 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.414 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.672 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.673 10:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.673 { 00:16:12.673 "cntlid": 77, 00:16:12.673 "qid": 0, 00:16:12.673 "state": "enabled", 00:16:12.673 "thread": "nvmf_tgt_poll_group_000", 00:16:12.673 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:12.673 "listen_address": { 00:16:12.673 "trtype": "TCP", 00:16:12.673 "adrfam": "IPv4", 00:16:12.673 "traddr": "10.0.0.2", 00:16:12.673 "trsvcid": "4420" 00:16:12.673 }, 00:16:12.673 "peer_address": { 00:16:12.673 "trtype": "TCP", 00:16:12.673 "adrfam": "IPv4", 00:16:12.673 "traddr": "10.0.0.1", 00:16:12.673 "trsvcid": "60386" 00:16:12.673 }, 00:16:12.673 "auth": { 00:16:12.673 "state": "completed", 00:16:12.673 "digest": "sha384", 00:16:12.673 "dhgroup": "ffdhe4096" 00:16:12.673 } 00:16:12.673 } 00:16:12.673 ]' 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.673 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.932 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.932 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.932 10:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.190 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:13.190 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
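The --dhchap-secret/--dhchap-ctrl-secret strings handed to nvme connect above use the standard DHHC-1 container format, DHHC-1:<hh>:<base64 key material + CRC>:, where <hh> identifies the hash used to transform the retained secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), hence the DHHC-1:01:/DHHC-1:02: pairs in this run. Recent nvme-cli releases can mint such secrets; a hedged sketch, since the exact flag spelling varies between nvme-cli versions:

# Mint a 32-byte DH-HMAC-CHAP secret transformed with SHA-256 (-> DHHC-1:01:...:).
nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn "$hostnqn"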
00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.759 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.017 10:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.276 00:16:14.276 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.276 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.276 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.536 
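Note that the key3 iteration above passes --dhchap-key key3 with no --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+...}) expansion collapses to nothing when no controller key is defined for that key id, so key3 exercises unidirectional authentication (the host proves itself to the controller, but not the reverse). The difference at the RPC level, with $subnqn/$hostnqn standing for the subsystem and host NQNs used throughout:

# Bidirectional (the key0/key1/key2 iterations): controller must also prove ckeyN.
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Unidirectional (the key3 iteration above): host-side proof only.
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3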
10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.536 { 00:16:14.536 "cntlid": 79, 00:16:14.536 "qid": 0, 00:16:14.536 "state": "enabled", 00:16:14.536 "thread": "nvmf_tgt_poll_group_000", 00:16:14.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:14.536 "listen_address": { 00:16:14.536 "trtype": "TCP", 00:16:14.536 "adrfam": "IPv4", 00:16:14.536 "traddr": "10.0.0.2", 00:16:14.536 "trsvcid": "4420" 00:16:14.536 }, 00:16:14.536 "peer_address": { 00:16:14.536 "trtype": "TCP", 00:16:14.536 "adrfam": "IPv4", 00:16:14.536 "traddr": "10.0.0.1", 00:16:14.536 "trsvcid": "49272" 00:16:14.536 }, 00:16:14.536 "auth": { 00:16:14.536 "state": "completed", 00:16:14.536 "digest": "sha384", 00:16:14.536 "dhgroup": "ffdhe4096" 00:16:14.536 } 00:16:14.536 } 00:16:14.536 ]' 00:16:14.536 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.795 10:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.057 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:15.057 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.624 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.884 10:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.452 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.452 { 00:16:16.452 "cntlid": 81, 00:16:16.452 "qid": 0, 00:16:16.452 "state": "enabled", 00:16:16.452 "thread": "nvmf_tgt_poll_group_000", 00:16:16.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:16.452 "listen_address": { 00:16:16.452 "trtype": "TCP", 00:16:16.452 "adrfam": "IPv4", 00:16:16.452 "traddr": "10.0.0.2", 00:16:16.452 "trsvcid": "4420" 00:16:16.452 }, 00:16:16.452 "peer_address": { 00:16:16.452 "trtype": "TCP", 00:16:16.452 "adrfam": "IPv4", 00:16:16.452 "traddr": "10.0.0.1", 00:16:16.452 "trsvcid": "49298" 00:16:16.452 }, 00:16:16.452 "auth": { 00:16:16.452 "state": "completed", 00:16:16.452 "digest": "sha384", 00:16:16.452 "dhgroup": "ffdhe6144" 00:16:16.452 } 00:16:16.452 } 00:16:16.452 ]' 00:16:16.452 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.711 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.970 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:16.970 10:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
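The backslash-heavy comparisons that xtrace prints above ([[ sha384 == \s\h\a\3\8\4 ]] and friends) are not corruption: set -x escapes every character of a quoted right-hand side to show the pattern is non-glob. In the script they are ordinary literal string tests, e.g.:

# What xtrace renders as:  [[ nvme0 == \n\v\m\e\0 ]]
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]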
00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.538 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.799 10:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.058 00:16:18.058 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.058 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.058 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.316 { 00:16:18.316 "cntlid": 83, 00:16:18.316 "qid": 0, 00:16:18.316 "state": "enabled", 00:16:18.316 "thread": "nvmf_tgt_poll_group_000", 00:16:18.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:18.316 "listen_address": { 00:16:18.316 "trtype": "TCP", 00:16:18.316 "adrfam": "IPv4", 00:16:18.316 "traddr": "10.0.0.2", 00:16:18.316 "trsvcid": "4420" 00:16:18.316 }, 00:16:18.316 "peer_address": { 00:16:18.316 "trtype": "TCP", 00:16:18.316 "adrfam": "IPv4", 00:16:18.316 "traddr": "10.0.0.1", 00:16:18.316 "trsvcid": "49322" 00:16:18.316 }, 00:16:18.316 "auth": { 00:16:18.316 "state": "completed", 00:16:18.316 "digest": "sha384", 00:16:18.316 "dhgroup": "ffdhe6144" 00:16:18.316 } 00:16:18.316 } 00:16:18.316 ]' 00:16:18.316 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.574 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.833 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:18.833 10:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:19.400 
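Two RPC sockets are in play throughout: rpc_cmd talks to the nvmf target on its default socket, while hostrpc drives a second SPDK application listening on /var/tmp/host.sock that acts as the NVMe-oF initiator. Judging from every hostrpc invocation in this log, the wrapper behaves as if defined like this (the real definition sits in target/auth.sh near the @31 marker):

# hostrpc: run an RPC against the host-side SPDK app instead of the target.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}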
10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.400 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.659 10:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.227 00:16:20.227 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.227 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.227 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.486 { 00:16:20.486 "cntlid": 85, 00:16:20.486 "qid": 0, 00:16:20.486 "state": "enabled", 00:16:20.486 "thread": "nvmf_tgt_poll_group_000", 00:16:20.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:20.486 "listen_address": { 00:16:20.486 "trtype": "TCP", 00:16:20.486 "adrfam": "IPv4", 00:16:20.486 "traddr": "10.0.0.2", 00:16:20.486 "trsvcid": "4420" 00:16:20.486 }, 00:16:20.486 "peer_address": { 00:16:20.486 "trtype": "TCP", 00:16:20.486 "adrfam": "IPv4", 00:16:20.486 "traddr": "10.0.0.1", 00:16:20.486 "trsvcid": "49344" 00:16:20.486 }, 00:16:20.486 "auth": { 00:16:20.486 "state": "completed", 00:16:20.486 "digest": "sha384", 00:16:20.486 "dhgroup": "ffdhe6144" 00:16:20.486 } 00:16:20.486 } 00:16:20.486 ]' 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.486 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.746 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:20.746 10:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.314 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.573 10:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.141 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.141 10:59:49 
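For the add_host/attach calls above to resolve names like key3, the DH-HMAC-CHAP secrets must already be registered with both SPDK applications; that setup happened before this slice of the log. Presumably it used the file-based keyring, along these lines (key name from this run; the file path is hypothetical):

# Register a DH-HMAC-CHAP secret file under a keyring name. Done against both
# the target socket and the /var/tmp/host.sock host app.
scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key3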
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.141 { 00:16:22.141 "cntlid": 87, 00:16:22.141 "qid": 0, 00:16:22.141 "state": "enabled", 00:16:22.141 "thread": "nvmf_tgt_poll_group_000", 00:16:22.141 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:22.141 "listen_address": { 00:16:22.141 "trtype": "TCP", 00:16:22.141 "adrfam": "IPv4", 00:16:22.141 "traddr": "10.0.0.2", 00:16:22.141 "trsvcid": "4420" 00:16:22.141 }, 00:16:22.141 "peer_address": { 00:16:22.141 "trtype": "TCP", 00:16:22.141 "adrfam": "IPv4", 00:16:22.141 "traddr": "10.0.0.1", 00:16:22.141 "trsvcid": "49380" 00:16:22.141 }, 00:16:22.141 "auth": { 00:16:22.141 "state": "completed", 00:16:22.141 "digest": "sha384", 00:16:22.141 "dhgroup": "ffdhe6144" 00:16:22.141 } 00:16:22.141 } 00:16:22.141 ]' 00:16:22.141 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.400 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.659 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:22.659 10:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.227 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.486 10:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.054 00:16:24.054 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.054 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:24.054 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.313 { 00:16:24.313 "cntlid": 89, 00:16:24.313 "qid": 0, 00:16:24.313 "state": "enabled", 00:16:24.313 "thread": "nvmf_tgt_poll_group_000", 00:16:24.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:24.313 "listen_address": { 00:16:24.313 "trtype": "TCP", 00:16:24.313 "adrfam": "IPv4", 00:16:24.313 "traddr": "10.0.0.2", 00:16:24.313 "trsvcid": "4420" 00:16:24.313 }, 00:16:24.313 "peer_address": { 00:16:24.313 "trtype": "TCP", 00:16:24.313 "adrfam": "IPv4", 00:16:24.313 "traddr": "10.0.0.1", 00:16:24.313 "trsvcid": "49396" 00:16:24.313 }, 00:16:24.313 "auth": { 00:16:24.313 "state": "completed", 00:16:24.313 "digest": "sha384", 00:16:24.313 "dhgroup": "ffdhe8192" 00:16:24.313 } 00:16:24.313 } 00:16:24.313 ]' 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.313 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.572 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:24.572 10:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:25.141 10:59:52 
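The nvme_connect legs of the test exercise the kernel initiator through nvme-cli rather than the SPDK host app. Annotating the invocation from this run (flags per standard nvme-cli short options; secrets elided):

# -t/-a: transport and target address (trsvcid defaults to 4420 when -s is omitted)
# -n: subsystem NQN; -q/--hostid: host NQN and host UUID (the same uuid here)
# -i 1: a single I/O queue; -l 0: ctrl-loss-tmo of zero so failures surface fast
# --dhchap-secret: host key (prefix 00 = untransformed);
# --dhchap-ctrl-secret: controller key, making the authentication bidirectional
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
    -q "$hostnqn" --hostid "$hostid" -i 1 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'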
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.141 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.400 10:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.968 00:16:25.968 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.968 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.968 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.256 { 00:16:26.256 "cntlid": 91, 00:16:26.256 "qid": 0, 00:16:26.256 "state": "enabled", 00:16:26.256 "thread": "nvmf_tgt_poll_group_000", 00:16:26.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:26.256 "listen_address": { 00:16:26.256 "trtype": "TCP", 00:16:26.256 "adrfam": "IPv4", 00:16:26.256 "traddr": "10.0.0.2", 00:16:26.256 "trsvcid": "4420" 00:16:26.256 }, 00:16:26.256 "peer_address": { 00:16:26.256 "trtype": "TCP", 00:16:26.256 "adrfam": "IPv4", 00:16:26.256 "traddr": "10.0.0.1", 00:16:26.256 "trsvcid": "58684" 00:16:26.256 }, 00:16:26.256 "auth": { 00:16:26.256 "state": "completed", 00:16:26.256 "digest": "sha384", 00:16:26.256 "dhgroup": "ffdhe8192" 00:16:26.256 } 00:16:26.256 } 00:16:26.256 ]' 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.256 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.565 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:26.565 10:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: 
--dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.131 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.390 10:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.957 00:16:27.957 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.957 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.957 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.216 { 00:16:28.216 "cntlid": 93, 00:16:28.216 "qid": 0, 00:16:28.216 "state": "enabled", 00:16:28.216 "thread": "nvmf_tgt_poll_group_000", 00:16:28.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:28.216 "listen_address": { 00:16:28.216 "trtype": "TCP", 00:16:28.216 "adrfam": "IPv4", 00:16:28.216 "traddr": "10.0.0.2", 00:16:28.216 "trsvcid": "4420" 00:16:28.216 }, 00:16:28.216 "peer_address": { 00:16:28.216 "trtype": "TCP", 00:16:28.216 "adrfam": "IPv4", 00:16:28.216 "traddr": "10.0.0.1", 00:16:28.216 "trsvcid": "58702" 00:16:28.216 }, 00:16:28.216 "auth": { 00:16:28.216 "state": "completed", 00:16:28.216 "digest": "sha384", 00:16:28.216 "dhgroup": "ffdhe8192" 00:16:28.216 } 00:16:28.216 } 00:16:28.216 ]' 00:16:28.216 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.531 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.788 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:28.788 10:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid 
e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:29.353 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.611 10:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.179 00:16:30.179 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.179 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.179 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.438 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.438 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.438 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.438 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.439 { 00:16:30.439 "cntlid": 95, 00:16:30.439 "qid": 0, 00:16:30.439 "state": "enabled", 00:16:30.439 "thread": "nvmf_tgt_poll_group_000", 00:16:30.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:30.439 "listen_address": { 00:16:30.439 "trtype": "TCP", 00:16:30.439 "adrfam": "IPv4", 00:16:30.439 "traddr": "10.0.0.2", 00:16:30.439 "trsvcid": "4420" 00:16:30.439 }, 00:16:30.439 "peer_address": { 00:16:30.439 "trtype": "TCP", 00:16:30.439 "adrfam": "IPv4", 00:16:30.439 "traddr": "10.0.0.1", 00:16:30.439 "trsvcid": "58738" 00:16:30.439 }, 00:16:30.439 "auth": { 00:16:30.439 "state": "completed", 00:16:30.439 "digest": "sha384", 00:16:30.439 "dhgroup": "ffdhe8192" 00:16:30.439 } 00:16:30.439 } 00:16:30.439 ]' 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.439 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.698 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.698 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.698 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.698 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:30.698 10:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret 
DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:31.265 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.524 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.782 10:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.041 00:16:32.041 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.041 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.041 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.300 { 00:16:32.300 "cntlid": 97, 00:16:32.300 "qid": 0, 00:16:32.300 "state": "enabled", 00:16:32.300 "thread": "nvmf_tgt_poll_group_000", 00:16:32.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:32.300 "listen_address": { 00:16:32.300 "trtype": "TCP", 00:16:32.300 "adrfam": "IPv4", 00:16:32.300 "traddr": "10.0.0.2", 00:16:32.300 "trsvcid": "4420" 00:16:32.300 }, 00:16:32.300 "peer_address": { 00:16:32.300 "trtype": "TCP", 00:16:32.300 "adrfam": "IPv4", 00:16:32.300 "traddr": "10.0.0.1", 00:16:32.300 "trsvcid": "58772" 00:16:32.300 }, 00:16:32.300 "auth": { 00:16:32.300 "state": "completed", 00:16:32.300 "digest": "sha512", 00:16:32.300 "dhgroup": "null" 00:16:32.300 } 00:16:32.300 } 00:16:32.300 ]' 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.300 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.559 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret 
DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:32.559 10:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.126 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:33.406 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:33.406 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.406 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:33.406 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.406 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.407 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.665 00:16:33.922 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.922 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.922 11:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.922 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.922 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.922 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.922 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.180 { 00:16:34.180 "cntlid": 99, 00:16:34.180 "qid": 0, 00:16:34.180 "state": "enabled", 00:16:34.180 "thread": "nvmf_tgt_poll_group_000", 00:16:34.180 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:34.180 "listen_address": { 00:16:34.180 "trtype": "TCP", 00:16:34.180 "adrfam": "IPv4", 00:16:34.180 "traddr": "10.0.0.2", 00:16:34.180 "trsvcid": "4420" 00:16:34.180 }, 00:16:34.180 "peer_address": { 00:16:34.180 "trtype": "TCP", 00:16:34.180 "adrfam": "IPv4", 00:16:34.180 "traddr": "10.0.0.1", 00:16:34.180 "trsvcid": "58812" 00:16:34.180 }, 00:16:34.180 "auth": { 00:16:34.180 "state": "completed", 00:16:34.180 "digest": "sha512", 00:16:34.180 "dhgroup": "null" 00:16:34.180 } 00:16:34.180 } 00:16:34.180 ]' 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.180 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.437 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:34.437 11:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.000 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.258 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.515 00:16:35.774 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.774 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.774 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.031 { 00:16:36.031 "cntlid": 101, 00:16:36.031 "qid": 0, 00:16:36.031 "state": "enabled", 00:16:36.031 "thread": "nvmf_tgt_poll_group_000", 00:16:36.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:36.031 "listen_address": { 00:16:36.031 "trtype": "TCP", 00:16:36.031 "adrfam": "IPv4", 00:16:36.031 "traddr": "10.0.0.2", 00:16:36.031 "trsvcid": "4420" 00:16:36.031 }, 00:16:36.031 "peer_address": { 00:16:36.031 "trtype": "TCP", 00:16:36.031 "adrfam": "IPv4", 00:16:36.031 "traddr": "10.0.0.1", 00:16:36.031 "trsvcid": "38156" 00:16:36.031 }, 00:16:36.031 "auth": { 00:16:36.031 "state": "completed", 00:16:36.031 "digest": "sha512", 00:16:36.031 "dhgroup": "null" 00:16:36.031 } 00:16:36.031 } 00:16:36.031 ]' 00:16:36.031 11:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.031 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
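At this point in the trace one more (sha512, null, key2) pass has finished its RPC-path half and is about to repeat the handshake through the in-kernel initiator. Condensed, every iteration of the loop reduces to the sequence below — a minimal sketch assembled only from commands visible in the trace, with $hostnqn/$hostid and the DHHC-1 secrets as placeholders for the generated values; rpc_cmd stands for the target-socket RPC wrapper from common/autotest_common.sh, hostrpc for the same rpc.py pinned to the host socket, as in target/auth.sh:

  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # 1. Pin the host to exactly one digest/dhgroup pair for this pass
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

  # 2. Target side: authorize the host NQN for this key; keys without a
  #    controller secret (key3 above) omit --dhchap-ctrlr-key, so the
  #    authentication for those passes is one-way only
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Host side: attach through an authenticated qpair, verify what the
  #    target negotiated, then detach again
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'
  hostrpc bdev_nvme_detach_controller nvme0

  # 4. Same handshake via nvme-cli, then tear the authorization down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:02:<key2…>" --dhchap-ctrl-secret "DHHC-1:01:<ckey2…>"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The jq probe is what the `[[ sha512 == \s\h\a\5\1\2 ]]`-style assertions in the trace consume: for this pass the qpair must report state "completed", digest "sha512", dhgroup "null" before the detach is allowed to proceed.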
00:16:36.318 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:36.318 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.883 11:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.448 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.705 00:16:37.705 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.705 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.705 11:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.963 { 00:16:37.963 "cntlid": 103, 00:16:37.963 "qid": 0, 00:16:37.963 "state": "enabled", 00:16:37.963 "thread": "nvmf_tgt_poll_group_000", 00:16:37.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:37.963 "listen_address": { 00:16:37.963 "trtype": "TCP", 00:16:37.963 "adrfam": "IPv4", 00:16:37.963 "traddr": "10.0.0.2", 00:16:37.963 "trsvcid": "4420" 00:16:37.963 }, 00:16:37.963 "peer_address": { 00:16:37.963 "trtype": "TCP", 00:16:37.963 "adrfam": "IPv4", 00:16:37.963 "traddr": "10.0.0.1", 00:16:37.963 "trsvcid": "38178" 00:16:37.963 }, 00:16:37.963 "auth": { 00:16:37.963 "state": "completed", 00:16:37.963 "digest": "sha512", 00:16:37.963 "dhgroup": "null" 00:16:37.963 } 00:16:37.963 } 00:16:37.963 ]' 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.963 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.220 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.220 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.220 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.477 11:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:38.478 11:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.044 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.610 11:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.610 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.868 00:16:39.868 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.868 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.868 11:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.125 { 00:16:40.125 "cntlid": 105, 00:16:40.125 "qid": 0, 00:16:40.125 "state": "enabled", 00:16:40.125 "thread": "nvmf_tgt_poll_group_000", 00:16:40.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:40.125 "listen_address": { 00:16:40.125 "trtype": "TCP", 00:16:40.125 "adrfam": "IPv4", 00:16:40.125 "traddr": "10.0.0.2", 00:16:40.125 "trsvcid": "4420" 00:16:40.125 }, 00:16:40.125 "peer_address": { 00:16:40.125 "trtype": "TCP", 00:16:40.125 "adrfam": "IPv4", 00:16:40.125 "traddr": "10.0.0.1", 00:16:40.125 "trsvcid": "38196" 00:16:40.125 }, 00:16:40.125 "auth": { 00:16:40.125 "state": "completed", 00:16:40.125 "digest": "sha512", 00:16:40.125 "dhgroup": "ffdhe2048" 00:16:40.125 } 00:16:40.125 } 00:16:40.125 ]' 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.125 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.383 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.383 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.383 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.640 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:40.641 11:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:41.206 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.206 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:41.206 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.206 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.207 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.207 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.207 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.207 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.464 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.465 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.465 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:41.465 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.465 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.465 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.032 00:16:42.032 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.032 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.032 11:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.032 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.032 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.032 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.032 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.290 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.290 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.291 { 00:16:42.291 "cntlid": 107, 00:16:42.291 "qid": 0, 00:16:42.291 "state": "enabled", 00:16:42.291 "thread": "nvmf_tgt_poll_group_000", 00:16:42.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:42.291 "listen_address": { 00:16:42.291 "trtype": "TCP", 00:16:42.291 "adrfam": "IPv4", 00:16:42.291 "traddr": "10.0.0.2", 00:16:42.291 "trsvcid": "4420" 00:16:42.291 }, 00:16:42.291 "peer_address": { 00:16:42.291 "trtype": "TCP", 00:16:42.291 "adrfam": "IPv4", 00:16:42.291 "traddr": "10.0.0.1", 00:16:42.291 "trsvcid": "38212" 00:16:42.291 }, 00:16:42.291 "auth": { 00:16:42.291 "state": "completed", 00:16:42.291 "digest": "sha512", 00:16:42.291 "dhgroup": "ffdhe2048" 00:16:42.291 } 00:16:42.291 } 00:16:42.291 ]' 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.291 11:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.291 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.617 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:42.617 11:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.183 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.441 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:43.441 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.442 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.700 00:16:43.700 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.700 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.700 11:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.957 { 00:16:43.957 "cntlid": 109, 00:16:43.957 "qid": 0, 00:16:43.957 "state": "enabled", 00:16:43.957 "thread": "nvmf_tgt_poll_group_000", 00:16:43.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:43.957 "listen_address": { 00:16:43.957 "trtype": "TCP", 00:16:43.957 "adrfam": "IPv4", 00:16:43.957 "traddr": "10.0.0.2", 00:16:43.957 "trsvcid": "4420" 00:16:43.957 }, 00:16:43.957 "peer_address": { 00:16:43.957 "trtype": "TCP", 00:16:43.957 "adrfam": "IPv4", 00:16:43.957 "traddr": "10.0.0.1", 00:16:43.957 "trsvcid": "38236" 00:16:43.957 }, 00:16:43.957 "auth": { 00:16:43.957 "state": "completed", 00:16:43.957 "digest": "sha512", 00:16:43.957 "dhgroup": "ffdhe2048" 00:16:43.957 } 00:16:43.957 } 00:16:43.957 ]' 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.957 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.958 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.216 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:44.216 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.216 11:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.216 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.216 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.473 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:44.473 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:45.039 11:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.039 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.297 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.298 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.298 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.555 00:16:45.555 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.555 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.555 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.813 { 00:16:45.813 "cntlid": 111, 00:16:45.813 "qid": 0, 00:16:45.813 "state": "enabled", 00:16:45.813 "thread": "nvmf_tgt_poll_group_000", 00:16:45.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:45.813 "listen_address": { 00:16:45.813 "trtype": "TCP", 00:16:45.813 "adrfam": "IPv4", 00:16:45.813 "traddr": "10.0.0.2", 00:16:45.813 "trsvcid": "4420" 00:16:45.813 }, 00:16:45.813 "peer_address": { 00:16:45.813 "trtype": "TCP", 00:16:45.813 "adrfam": "IPv4", 00:16:45.813 "traddr": "10.0.0.1", 00:16:45.813 "trsvcid": "56366" 00:16:45.813 }, 00:16:45.813 "auth": { 00:16:45.813 "state": "completed", 00:16:45.813 "digest": "sha512", 00:16:45.813 "dhgroup": "ffdhe2048" 00:16:45.813 } 00:16:45.813 } 00:16:45.813 ]' 00:16:45.813 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.070 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.070 11:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.070 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.070 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.070 
11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.070 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.070 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.329 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:46.329 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:46.893 11:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.150 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.406 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.407 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.407 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.664 00:16:47.664 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.664 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.664 11:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.259 { 00:16:48.259 "cntlid": 113, 00:16:48.259 "qid": 0, 00:16:48.259 "state": "enabled", 00:16:48.259 "thread": "nvmf_tgt_poll_group_000", 00:16:48.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:48.259 "listen_address": { 00:16:48.259 "trtype": "TCP", 00:16:48.259 "adrfam": "IPv4", 00:16:48.259 "traddr": "10.0.0.2", 00:16:48.259 "trsvcid": "4420" 00:16:48.259 }, 00:16:48.259 "peer_address": { 00:16:48.259 "trtype": "TCP", 00:16:48.259 "adrfam": "IPv4", 00:16:48.259 "traddr": "10.0.0.1", 00:16:48.259 "trsvcid": "56396" 00:16:48.259 }, 00:16:48.259 "auth": { 00:16:48.259 "state": "completed", 00:16:48.259 "digest": "sha512", 00:16:48.259 "dhgroup": "ffdhe3072" 00:16:48.259 } 00:16:48.259 } 00:16:48.259 ]' 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.259 11:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.259 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.515 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:48.515 11:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.446 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.703 11:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.960 00:16:49.960 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.960 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.960 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.218 { 00:16:50.218 "cntlid": 115, 00:16:50.218 "qid": 0, 00:16:50.218 "state": "enabled", 00:16:50.218 "thread": "nvmf_tgt_poll_group_000", 00:16:50.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:50.218 "listen_address": { 00:16:50.218 "trtype": "TCP", 00:16:50.218 "adrfam": "IPv4", 00:16:50.218 "traddr": "10.0.0.2", 00:16:50.218 "trsvcid": "4420" 00:16:50.218 }, 00:16:50.218 "peer_address": { 00:16:50.218 "trtype": "TCP", 00:16:50.218 "adrfam": "IPv4", 00:16:50.218 "traddr": "10.0.0.1", 00:16:50.218 "trsvcid": "56410" 00:16:50.218 }, 00:16:50.218 "auth": { 00:16:50.218 "state": "completed", 00:16:50.218 "digest": "sha512", 00:16:50.218 "dhgroup": "ffdhe3072" 00:16:50.218 } 00:16:50.218 } 00:16:50.218 ]' 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.218 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.218 11:00:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.476 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.476 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.476 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.476 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.476 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.733 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:50.733 11:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.300 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.558 11:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.126 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.126 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.384 { 00:16:52.384 "cntlid": 117, 00:16:52.384 "qid": 0, 00:16:52.384 "state": "enabled", 00:16:52.384 "thread": "nvmf_tgt_poll_group_000", 00:16:52.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:52.384 "listen_address": { 00:16:52.384 "trtype": "TCP", 00:16:52.384 "adrfam": "IPv4", 00:16:52.384 "traddr": "10.0.0.2", 00:16:52.384 "trsvcid": "4420" 00:16:52.384 }, 00:16:52.384 "peer_address": { 00:16:52.384 "trtype": "TCP", 00:16:52.384 "adrfam": "IPv4", 00:16:52.384 "traddr": "10.0.0.1", 00:16:52.384 "trsvcid": "56440" 00:16:52.384 }, 00:16:52.384 "auth": { 00:16:52.384 "state": "completed", 00:16:52.384 "digest": "sha512", 00:16:52.384 "dhgroup": "ffdhe3072" 00:16:52.384 } 00:16:52.384 } 00:16:52.384 ]' 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
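
The trace is in the middle of the sha512/ffdhe3072 pass here, repeating one verification cycle per key slot. The following is a condensed, illustrative reconstruction of that cycle, built only from the RPCs visible in this trace — the helper wiring is assumed (hostrpc wraps scripts/rpc.py against the initiator app at /var/tmp/host.sock as the log shows; rpc_cmd is assumed to hit the nvmf target's default socket) and this is a sketch, not the verbatim target/auth.sh source:

    # Illustrative reconstruction of one connect_authenticate cycle; helper
    # wiring assumed, commands and NQNs taken from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # initiator-side bdev app (as in the trace)
    rpc_cmd() { "$rpc" "$@"; }                        # nvmf target, assumed default socket

    uuid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. pin the initiator to a single digest/dhgroup combination
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # 2. authorize the host on the subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3. attach a controller, which forces the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 4. assert what the target's qpair actually negotiated
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # expect sha512
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # expect ffdhe3072
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect completed
    # 5. tear down, then repeat the handshake with the kernel initiator
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$uuid" -l 0 \
        --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"  # the DHHC-1 strings from the log
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Two details worth noting from the trace itself: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at auth.sh@68 makes the controller key optional, which is why the key3 iterations add the host with --dhchap-key key3 alone; and in the DHHC-1:xx:...: secrets the two-digit field identifies the key transform defined for DH-HMAC-CHAP secrets (00 = cleartext, 01/02/03 = SHA-256/-384/-512 transformed), matching the key0..key3 slots exercised here.
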
00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.384 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.641 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:52.641 11:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.206 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.464 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.030 00:16:54.030 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.030 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.030 11:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.030 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.030 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.030 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.030 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.288 { 00:16:54.288 "cntlid": 119, 00:16:54.288 "qid": 0, 00:16:54.288 "state": "enabled", 00:16:54.288 "thread": "nvmf_tgt_poll_group_000", 00:16:54.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:54.288 "listen_address": { 00:16:54.288 "trtype": "TCP", 00:16:54.288 "adrfam": "IPv4", 00:16:54.288 "traddr": "10.0.0.2", 00:16:54.288 "trsvcid": "4420" 00:16:54.288 }, 00:16:54.288 "peer_address": { 00:16:54.288 "trtype": "TCP", 00:16:54.288 "adrfam": "IPv4", 00:16:54.288 "traddr": "10.0.0.1", 00:16:54.288 "trsvcid": "56462" 00:16:54.288 }, 00:16:54.288 "auth": { 00:16:54.288 "state": "completed", 00:16:54.288 "digest": "sha512", 00:16:54.288 "dhgroup": "ffdhe3072" 00:16:54.288 } 00:16:54.288 } 00:16:54.288 ]' 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.288 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.289 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.289 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.289 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.289 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.547 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:54.547 11:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:16:55.113 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.113 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:55.113 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.114 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.372 11:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.372 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.630 00:16:55.630 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.630 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.630 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.888 { 00:16:55.888 "cntlid": 121, 00:16:55.888 "qid": 0, 00:16:55.888 "state": "enabled", 00:16:55.888 "thread": "nvmf_tgt_poll_group_000", 00:16:55.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:55.888 "listen_address": { 00:16:55.888 "trtype": "TCP", 00:16:55.888 "adrfam": "IPv4", 00:16:55.888 "traddr": "10.0.0.2", 00:16:55.888 "trsvcid": "4420" 00:16:55.888 }, 00:16:55.888 "peer_address": { 00:16:55.888 "trtype": "TCP", 00:16:55.888 "adrfam": "IPv4", 00:16:55.888 "traddr": "10.0.0.1", 00:16:55.888 "trsvcid": "56632" 00:16:55.888 }, 00:16:55.888 "auth": { 00:16:55.888 "state": "completed", 00:16:55.888 "digest": "sha512", 00:16:55.888 "dhgroup": "ffdhe4096" 
00:16:55.888 } 00:16:55.888 } 00:16:55.888 ]' 00:16:55.888 11:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.888 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.888 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.888 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.146 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.146 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.146 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.146 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.405 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:56.405 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.971 11:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.228 11:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.228 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.229 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.229 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.229 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.229 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.229 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.487 00:16:57.487 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.487 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.487 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.744 { 00:16:57.744 "cntlid": 123, 00:16:57.744 "qid": 0, 00:16:57.744 "state": "enabled", 00:16:57.744 "thread": "nvmf_tgt_poll_group_000", 00:16:57.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:57.744 "listen_address": { 00:16:57.744 "trtype": "TCP", 00:16:57.744 "adrfam": "IPv4", 00:16:57.744 "traddr": "10.0.0.2", 00:16:57.744 "trsvcid": "4420" 00:16:57.744 }, 00:16:57.744 "peer_address": { 00:16:57.744 "trtype": "TCP", 00:16:57.744 "adrfam": 
"IPv4", 00:16:57.744 "traddr": "10.0.0.1", 00:16:57.744 "trsvcid": "56660" 00:16:57.744 }, 00:16:57.744 "auth": { 00:16:57.744 "state": "completed", 00:16:57.744 "digest": "sha512", 00:16:57.744 "dhgroup": "ffdhe4096" 00:16:57.744 } 00:16:57.744 } 00:16:57.744 ]' 00:16:57.744 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.010 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.010 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.010 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.010 11:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.010 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.010 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.010 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.299 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:58.299 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.865 11:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:59.124 11:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.124 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.688 00:16:59.688 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.688 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.688 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.947 { 00:16:59.947 "cntlid": 125, 00:16:59.947 "qid": 0, 00:16:59.947 "state": "enabled", 00:16:59.947 "thread": "nvmf_tgt_poll_group_000", 00:16:59.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:16:59.947 "listen_address": { 00:16:59.947 "trtype": "TCP", 00:16:59.947 "adrfam": "IPv4", 00:16:59.947 "traddr": "10.0.0.2", 
00:16:59.947 "trsvcid": "4420" 00:16:59.947 }, 00:16:59.947 "peer_address": { 00:16:59.947 "trtype": "TCP", 00:16:59.947 "adrfam": "IPv4", 00:16:59.947 "traddr": "10.0.0.1", 00:16:59.947 "trsvcid": "56684" 00:16:59.947 }, 00:16:59.947 "auth": { 00:16:59.947 "state": "completed", 00:16:59.947 "digest": "sha512", 00:16:59.947 "dhgroup": "ffdhe4096" 00:16:59.947 } 00:16:59.947 } 00:16:59.947 ]' 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.947 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.948 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.948 11:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.948 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.948 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.948 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.206 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:00.206 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.770 11:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.028 11:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.028 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.594 00:17:01.594 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.594 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.594 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.851 { 00:17:01.851 "cntlid": 127, 00:17:01.851 "qid": 0, 00:17:01.851 "state": "enabled", 00:17:01.851 "thread": "nvmf_tgt_poll_group_000", 00:17:01.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:01.851 "listen_address": { 00:17:01.851 "trtype": "TCP", 00:17:01.851 "adrfam": "IPv4", 
00:17:01.851 "traddr": "10.0.0.2", 00:17:01.851 "trsvcid": "4420" 00:17:01.851 }, 00:17:01.851 "peer_address": { 00:17:01.851 "trtype": "TCP", 00:17:01.851 "adrfam": "IPv4", 00:17:01.851 "traddr": "10.0.0.1", 00:17:01.851 "trsvcid": "56716" 00:17:01.851 }, 00:17:01.851 "auth": { 00:17:01.851 "state": "completed", 00:17:01.851 "digest": "sha512", 00:17:01.851 "dhgroup": "ffdhe4096" 00:17:01.851 } 00:17:01.851 } 00:17:01.851 ]' 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.851 11:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.110 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:02.110 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.677 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.678 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.678 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.936 11:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.504 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.504 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.763 { 00:17:03.763 "cntlid": 129, 00:17:03.763 "qid": 0, 00:17:03.763 "state": "enabled", 00:17:03.763 "thread": "nvmf_tgt_poll_group_000", 00:17:03.763 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:03.763 "listen_address": { 00:17:03.763 "trtype": "TCP", 00:17:03.763 "adrfam": "IPv4", 00:17:03.763 "traddr": "10.0.0.2", 00:17:03.763 "trsvcid": "4420" 00:17:03.763 }, 00:17:03.763 "peer_address": { 00:17:03.763 "trtype": "TCP", 00:17:03.763 "adrfam": "IPv4", 00:17:03.763 "traddr": "10.0.0.1", 00:17:03.763 "trsvcid": "56750" 00:17:03.763 }, 00:17:03.763 "auth": { 00:17:03.763 "state": "completed", 00:17:03.763 "digest": "sha512", 00:17:03.763 "dhgroup": "ffdhe6144" 00:17:03.763 } 00:17:03.763 } 00:17:03.763 ]' 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.763 11:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.022 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:04.022 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:17:04.591 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.849 11:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.416 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.416 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.673 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.673 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:17:05.673 { 00:17:05.673 "cntlid": 131, 00:17:05.673 "qid": 0, 00:17:05.673 "state": "enabled", 00:17:05.673 "thread": "nvmf_tgt_poll_group_000", 00:17:05.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:05.674 "listen_address": { 00:17:05.674 "trtype": "TCP", 00:17:05.674 "adrfam": "IPv4", 00:17:05.674 "traddr": "10.0.0.2", 00:17:05.674 "trsvcid": "4420" 00:17:05.674 }, 00:17:05.674 "peer_address": { 00:17:05.674 "trtype": "TCP", 00:17:05.674 "adrfam": "IPv4", 00:17:05.674 "traddr": "10.0.0.1", 00:17:05.674 "trsvcid": "46152" 00:17:05.674 }, 00:17:05.674 "auth": { 00:17:05.674 "state": "completed", 00:17:05.674 "digest": "sha512", 00:17:05.674 "dhgroup": "ffdhe6144" 00:17:05.674 } 00:17:05.674 } 00:17:05.674 ]' 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.674 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.932 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:17:05.932 11:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.497 11:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.497 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.754 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.755 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.755 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.755 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.755 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.755 11:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.320 00:17:07.320 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.320 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.320 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.578 11:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.578 { 00:17:07.578 "cntlid": 133, 00:17:07.578 "qid": 0, 00:17:07.578 "state": "enabled", 00:17:07.578 "thread": "nvmf_tgt_poll_group_000", 00:17:07.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:07.578 "listen_address": { 00:17:07.578 "trtype": "TCP", 00:17:07.578 "adrfam": "IPv4", 00:17:07.578 "traddr": "10.0.0.2", 00:17:07.578 "trsvcid": "4420" 00:17:07.578 }, 00:17:07.578 "peer_address": { 00:17:07.578 "trtype": "TCP", 00:17:07.578 "adrfam": "IPv4", 00:17:07.578 "traddr": "10.0.0.1", 00:17:07.578 "trsvcid": "46174" 00:17:07.578 }, 00:17:07.578 "auth": { 00:17:07.578 "state": "completed", 00:17:07.578 "digest": "sha512", 00:17:07.578 "dhgroup": "ffdhe6144" 00:17:07.578 } 00:17:07.578 } 00:17:07.578 ]' 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.578 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.895 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:07.895 11:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
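
Note: the pass/fail signal for each round is read back from the target rather than inferred from the connect call: nvmf_subsystem_get_qpairs reports an auth object per queue pair, and the test asserts the negotiated digest, the dhgroup, and state "completed" with jq, exactly as in the checks above. In isolation the verification amounts to this sketch, run against the target's default RPC socket:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # All three fields must match what bdev_nvme_set_options allowed for this round.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
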
00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.478 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.735 11:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.993 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.251 
11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.251 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.251 { 00:17:09.251 "cntlid": 135, 00:17:09.251 "qid": 0, 00:17:09.251 "state": "enabled", 00:17:09.251 "thread": "nvmf_tgt_poll_group_000", 00:17:09.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:09.251 "listen_address": { 00:17:09.251 "trtype": "TCP", 00:17:09.251 "adrfam": "IPv4", 00:17:09.251 "traddr": "10.0.0.2", 00:17:09.251 "trsvcid": "4420" 00:17:09.251 }, 00:17:09.251 "peer_address": { 00:17:09.251 "trtype": "TCP", 00:17:09.251 "adrfam": "IPv4", 00:17:09.251 "traddr": "10.0.0.1", 00:17:09.251 "trsvcid": "46196" 00:17:09.251 }, 00:17:09.251 "auth": { 00:17:09.251 "state": "completed", 00:17:09.251 "digest": "sha512", 00:17:09.251 "dhgroup": "ffdhe6144" 00:17:09.251 } 00:17:09.251 } 00:17:09.251 ]' 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.508 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.765 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:09.765 11:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.333 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.591 11:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.158 00:17:11.158 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.158 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.158 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.417 { 00:17:11.417 "cntlid": 137, 00:17:11.417 "qid": 0, 00:17:11.417 "state": "enabled", 00:17:11.417 "thread": "nvmf_tgt_poll_group_000", 00:17:11.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:11.417 "listen_address": { 00:17:11.417 "trtype": "TCP", 00:17:11.417 "adrfam": "IPv4", 00:17:11.417 "traddr": "10.0.0.2", 00:17:11.417 "trsvcid": "4420" 00:17:11.417 }, 00:17:11.417 "peer_address": { 00:17:11.417 "trtype": "TCP", 00:17:11.417 "adrfam": "IPv4", 00:17:11.417 "traddr": "10.0.0.1", 00:17:11.417 "trsvcid": "46222" 00:17:11.417 }, 00:17:11.417 "auth": { 00:17:11.417 "state": "completed", 00:17:11.417 "digest": "sha512", 00:17:11.417 "dhgroup": "ffdhe8192" 00:17:11.417 } 00:17:11.417 } 00:17:11.417 ]' 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.417 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.675 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:11.675 11:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
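
Note: besides the SPDK initiator, every round re-runs the handshake with the Linux kernel host via nvme-cli, handing the secrets over on the command line: --dhchap-secret carries the host key and --dhchap-ctrl-secret the controller key, both in the NVMe specification's DHHC-1 secret representation (the two-digit field after "DHHC-1:" records the secret's hash transform, with 00 meaning no transform). Trimmed to its essentials, that leg looks like the sketch below; the secrets are abbreviated with "...", and the full values appear in the surrounding log:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:00:ODA3...BDXgSg==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:MzY0...NSXsvo0=:'
    # Drop the controller again once the session has been verified.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
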
00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.241 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.499 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:12.499 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.499 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.500 11:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.132 00:17:13.132 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.132 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.132 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.417 { 00:17:13.417 "cntlid": 139, 00:17:13.417 "qid": 0, 00:17:13.417 "state": "enabled", 00:17:13.417 "thread": "nvmf_tgt_poll_group_000", 00:17:13.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:13.417 "listen_address": { 00:17:13.417 "trtype": "TCP", 00:17:13.417 "adrfam": "IPv4", 00:17:13.417 "traddr": "10.0.0.2", 00:17:13.417 "trsvcid": "4420" 00:17:13.417 }, 00:17:13.417 "peer_address": { 00:17:13.417 "trtype": "TCP", 00:17:13.417 "adrfam": "IPv4", 00:17:13.417 "traddr": "10.0.0.1", 00:17:13.417 "trsvcid": "46242" 00:17:13.417 }, 00:17:13.417 "auth": { 00:17:13.417 "state": "completed", 00:17:13.417 "digest": "sha512", 00:17:13.417 "dhgroup": "ffdhe8192" 00:17:13.417 } 00:17:13.417 } 00:17:13.417 ]' 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.417 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.677 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.677 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.677 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.677 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:17:13.677 11:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: --dhchap-ctrl-secret DHHC-1:02:YTMyYTBjMzJjYmM0ZGE4YjU1YjViMDM4MGI3ZmJmOTY3NDg1YmZkMGU5MGRmOTE3AMVFiw==: 00:17:14.244 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:14.503 
11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.503 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.761 11:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.328 00:17:15.328 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.328 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.328 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.586 { 00:17:15.586 "cntlid": 141, 00:17:15.586 "qid": 0, 00:17:15.586 "state": "enabled", 00:17:15.586 "thread": "nvmf_tgt_poll_group_000", 00:17:15.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:15.586 "listen_address": { 00:17:15.586 "trtype": "TCP", 00:17:15.586 "adrfam": "IPv4", 00:17:15.586 "traddr": "10.0.0.2", 00:17:15.586 "trsvcid": "4420" 00:17:15.586 }, 00:17:15.586 "peer_address": { 00:17:15.586 "trtype": "TCP", 00:17:15.586 "adrfam": "IPv4", 00:17:15.586 "traddr": "10.0.0.1", 00:17:15.586 "trsvcid": "41494" 00:17:15.586 }, 00:17:15.586 "auth": { 00:17:15.586 "state": "completed", 00:17:15.586 "digest": "sha512", 00:17:15.586 "dhgroup": "ffdhe8192" 00:17:15.586 } 00:17:15.586 } 00:17:15.586 ]' 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.586 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.845 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:15.845 11:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:01:ZmVlMWVhMDk0YmM5MzU1MTMyNjZjYTdhOWIwYmVjMWXSct2+: 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.411 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.669 11:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.293 00:17:17.293 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.293 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.293 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.562 11:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.562 { 00:17:17.562 "cntlid": 143, 00:17:17.562 "qid": 0, 00:17:17.562 "state": "enabled", 00:17:17.562 "thread": "nvmf_tgt_poll_group_000", 00:17:17.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:17.562 "listen_address": { 00:17:17.562 "trtype": "TCP", 00:17:17.562 "adrfam": "IPv4", 00:17:17.562 "traddr": "10.0.0.2", 00:17:17.562 "trsvcid": "4420" 00:17:17.562 }, 00:17:17.562 "peer_address": { 00:17:17.562 "trtype": "TCP", 00:17:17.562 "adrfam": "IPv4", 00:17:17.562 "traddr": "10.0.0.1", 00:17:17.562 "trsvcid": "41510" 00:17:17.562 }, 00:17:17.562 "auth": { 00:17:17.562 "state": "completed", 00:17:17.562 "digest": "sha512", 00:17:17.562 "dhgroup": "ffdhe8192" 00:17:17.562 } 00:17:17.562 } 00:17:17.562 ]' 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.562 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.820 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:17.820 11:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:18.391 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.392 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.392 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.649 11:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.214 00:17:19.214 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.214 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.214 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.473 { 00:17:19.473 "cntlid": 145, 00:17:19.473 "qid": 0, 00:17:19.473 "state": "enabled", 00:17:19.473 "thread": "nvmf_tgt_poll_group_000", 00:17:19.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:19.473 "listen_address": { 00:17:19.473 "trtype": "TCP", 00:17:19.473 "adrfam": "IPv4", 00:17:19.473 "traddr": "10.0.0.2", 00:17:19.473 "trsvcid": "4420" 00:17:19.473 }, 00:17:19.473 "peer_address": { 00:17:19.473 "trtype": "TCP", 00:17:19.473 "adrfam": "IPv4", 00:17:19.473 "traddr": "10.0.0.1", 00:17:19.473 "trsvcid": "41538" 00:17:19.473 }, 00:17:19.473 "auth": { 00:17:19.473 "state": "completed", 00:17:19.473 "digest": "sha512", 00:17:19.473 "dhgroup": "ffdhe8192" 00:17:19.473 } 00:17:19.473 } 00:17:19.473 ]' 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.473 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.731 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:19.731 11:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:00:ODA3OWRlNTRhNzA3NmVjZDlhMjI4NGMzNGVhYTllOWRlZDY3MjExZTI0ZmQ4YmYxBDXgSg==: --dhchap-ctrl-secret DHHC-1:03:MzY0MTJlNWMzNWM1OTM1YjI3NmU2ZjUxZjkyYjAwNjRjNzdmNmRlZWQ3MGRhNWU1ZjU3NGJjZWFlZDEyYTU4NSXsvo0=: 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:20.294 11:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:20.858 request: 00:17:20.858 { 00:17:20.858 "name": "nvme0", 00:17:20.858 "trtype": "tcp", 00:17:20.858 "traddr": "10.0.0.2", 
00:17:20.858 "adrfam": "ipv4", 00:17:20.858 "trsvcid": "4420", 00:17:20.858 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:20.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:20.858 "prchk_reftag": false, 00:17:20.858 "prchk_guard": false, 00:17:20.858 "hdgst": false, 00:17:20.858 "ddgst": false, 00:17:20.858 "dhchap_key": "key2", 00:17:20.858 "allow_unrecognized_csi": false, 00:17:20.858 "method": "bdev_nvme_attach_controller", 00:17:20.858 "req_id": 1 00:17:20.858 } 00:17:20.858 Got JSON-RPC error response 00:17:20.858 response: 00:17:20.858 { 00:17:20.858 "code": -5, 00:17:20.858 "message": "Input/output error" 00:17:20.858 } 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.858 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.115 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.116 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:21.680 request: 00:17:21.680 { 00:17:21.680 "name": "nvme0", 00:17:21.680 "trtype": "tcp", 00:17:21.680 "traddr": "10.0.0.2", 00:17:21.680 "adrfam": "ipv4", 00:17:21.680 "trsvcid": "4420", 00:17:21.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:21.680 "prchk_reftag": false, 00:17:21.680 "prchk_guard": false, 00:17:21.680 "hdgst": false, 00:17:21.680 "ddgst": false, 00:17:21.680 "dhchap_key": "key1", 00:17:21.680 "dhchap_ctrlr_key": "ckey2", 00:17:21.680 "allow_unrecognized_csi": false, 00:17:21.680 "method": "bdev_nvme_attach_controller", 00:17:21.680 "req_id": 1 00:17:21.680 } 00:17:21.680 Got JSON-RPC error response 00:17:21.680 response: 00:17:21.680 { 00:17:21.680 "code": -5, 00:17:21.680 "message": "Input/output error" 00:17:21.680 } 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.680 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.681 11:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.250 request: 00:17:22.250 { 00:17:22.250 "name": "nvme0", 00:17:22.250 "trtype": "tcp", 00:17:22.250 "traddr": "10.0.0.2", 00:17:22.250 "adrfam": "ipv4", 00:17:22.250 "trsvcid": "4420", 00:17:22.250 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:22.250 "prchk_reftag": false, 00:17:22.250 "prchk_guard": false, 00:17:22.250 "hdgst": false, 00:17:22.250 "ddgst": false, 00:17:22.250 "dhchap_key": "key1", 00:17:22.250 "dhchap_ctrlr_key": "ckey1", 00:17:22.250 "allow_unrecognized_csi": false, 00:17:22.250 "method": "bdev_nvme_attach_controller", 00:17:22.250 "req_id": 1 00:17:22.250 } 00:17:22.250 Got JSON-RPC error response 00:17:22.250 response: 00:17:22.250 { 00:17:22.250 "code": -5, 00:17:22.250 "message": "Input/output error" 00:17:22.250 } 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67250 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67250 ']' 00:17:22.250 11:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67250 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67250 00:17:22.250 killing process with pid 67250 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67250' 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67250 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67250 00:17:22.250 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=70161 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 70161 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70161 ']' 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.531 11:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:23.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70161 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70161 ']' 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.466 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.724 null0 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:23.724 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.vOg 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.VSF ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.VSF 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kOS 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.iZO ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iZO 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:23.725 11:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.mLM 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.rKO ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rKO 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tME 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:23.725 11:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.657 nvme0n1 00:17:24.657 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.657 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.657 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.915 { 00:17:24.915 "cntlid": 1, 00:17:24.915 "qid": 0, 00:17:24.915 "state": "enabled", 00:17:24.915 "thread": "nvmf_tgt_poll_group_000", 00:17:24.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:24.915 "listen_address": { 00:17:24.915 "trtype": "TCP", 00:17:24.915 "adrfam": "IPv4", 00:17:24.915 "traddr": "10.0.0.2", 00:17:24.915 "trsvcid": "4420" 00:17:24.915 }, 00:17:24.915 "peer_address": { 00:17:24.915 "trtype": "TCP", 00:17:24.915 "adrfam": "IPv4", 00:17:24.915 "traddr": "10.0.0.1", 00:17:24.915 "trsvcid": "52192" 00:17:24.915 }, 00:17:24.915 "auth": { 00:17:24.915 "state": "completed", 00:17:24.915 "digest": "sha512", 00:17:24.915 "dhgroup": "ffdhe8192" 00:17:24.915 } 00:17:24.915 } 00:17:24.915 ]' 00:17:24.915 11:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.915 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.915 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.915 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.915 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.173 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.173 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.173 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.430 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:25.430 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key3 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:25.994 11:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.251 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.509 request: 00:17:26.509 { 00:17:26.509 "name": "nvme0", 00:17:26.509 "trtype": "tcp", 00:17:26.509 "traddr": "10.0.0.2", 00:17:26.509 "adrfam": "ipv4", 00:17:26.509 "trsvcid": "4420", 00:17:26.509 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:26.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:26.509 "prchk_reftag": false, 00:17:26.509 "prchk_guard": false, 00:17:26.509 "hdgst": false, 00:17:26.509 "ddgst": false, 00:17:26.509 "dhchap_key": "key3", 00:17:26.509 "allow_unrecognized_csi": false, 00:17:26.509 "method": "bdev_nvme_attach_controller", 00:17:26.509 "req_id": 1 00:17:26.509 } 00:17:26.509 Got JSON-RPC error response 00:17:26.509 response: 00:17:26.509 { 00:17:26.509 "code": -5, 00:17:26.509 "message": "Input/output error" 00:17:26.509 } 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:26.509 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.767 11:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.024 request: 00:17:27.024 { 00:17:27.024 "name": "nvme0", 00:17:27.024 "trtype": "tcp", 00:17:27.024 "traddr": "10.0.0.2", 00:17:27.024 "adrfam": "ipv4", 00:17:27.024 "trsvcid": "4420", 00:17:27.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:27.024 "prchk_reftag": false, 00:17:27.024 "prchk_guard": false, 00:17:27.024 "hdgst": false, 00:17:27.024 "ddgst": false, 00:17:27.024 "dhchap_key": "key3", 00:17:27.024 "allow_unrecognized_csi": false, 00:17:27.024 "method": "bdev_nvme_attach_controller", 00:17:27.024 "req_id": 1 00:17:27.024 } 00:17:27.024 Got JSON-RPC error response 00:17:27.024 response: 00:17:27.024 { 00:17:27.024 "code": -5, 00:17:27.024 "message": "Input/output error" 00:17:27.024 } 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.024 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.281 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.282 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.540 request: 00:17:27.540 { 00:17:27.540 "name": "nvme0", 00:17:27.540 "trtype": "tcp", 00:17:27.540 "traddr": "10.0.0.2", 00:17:27.540 "adrfam": "ipv4", 00:17:27.540 "trsvcid": "4420", 00:17:27.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:27.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:27.540 "prchk_reftag": false, 00:17:27.540 "prchk_guard": false, 00:17:27.540 "hdgst": false, 00:17:27.540 "ddgst": false, 00:17:27.540 "dhchap_key": "key0", 00:17:27.540 "dhchap_ctrlr_key": "key1", 00:17:27.540 "allow_unrecognized_csi": false, 00:17:27.540 "method": "bdev_nvme_attach_controller", 00:17:27.540 "req_id": 1 00:17:27.540 } 00:17:27.540 Got JSON-RPC error response 00:17:27.540 response: 00:17:27.540 { 00:17:27.540 "code": -5, 00:17:27.540 "message": "Input/output error" 00:17:27.540 } 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:27.540 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:28.104 nvme0n1 00:17:28.104 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:28.104 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:28.104 11:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.104 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.104 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.104 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:28.360 11:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:29.295 nvme0n1 00:17:29.295 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:29.295 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:29.295 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:29.553 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.812 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.812 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:29.812 11:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -l 0 --dhchap-secret DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: --dhchap-ctrl-secret DHHC-1:03:MWNiNzM5OTgzMWJlNDMyNjczZTc2Zjk0MWU3NDRmYjFhYWZmODY3MmUwZWEwYjE5YTdhY2YyMTg2YTRiMjQ1Yp/xACU=: 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.381 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:30.640 11:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:31.208 request: 00:17:31.208 { 00:17:31.208 "name": "nvme0", 00:17:31.208 "trtype": "tcp", 00:17:31.208 "traddr": "10.0.0.2", 00:17:31.208 "adrfam": "ipv4", 00:17:31.208 "trsvcid": "4420", 00:17:31.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11", 00:17:31.208 "prchk_reftag": false, 00:17:31.208 "prchk_guard": false, 00:17:31.208 "hdgst": false, 00:17:31.208 "ddgst": false, 00:17:31.208 "dhchap_key": "key1", 00:17:31.208 "allow_unrecognized_csi": false, 00:17:31.208 "method": "bdev_nvme_attach_controller", 00:17:31.208 "req_id": 1 00:17:31.208 } 00:17:31.208 Got JSON-RPC error response 00:17:31.208 response: 00:17:31.208 { 00:17:31.208 "code": -5, 00:17:31.208 "message": "Input/output error" 00:17:31.208 } 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:31.208 11:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:32.146 nvme0n1 00:17:32.146 
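The request/response pair above is the expected negative result: once the target only accepts key2/key3, attaching with the retired key1 fails DH-HMAC-CHAP negotiation and the RPC surfaces it as code -5 (Input/output error). The NOT wrapper from autotest_common.sh simply inverts the exit status; the bare-bash equivalent of this check is:

  if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1; then
    echo 'retired key was accepted, test should fail' >&2
    exit 1
  fi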
11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:32.146 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:32.146 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.406 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.406 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.406 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:32.665 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:32.964 nvme0n1 00:17:32.964 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:32.964 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:32.964 11:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.229 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.229 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.229 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.488 11:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: '' 2s 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: ]] 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGQ0OGEwOTc1YmM5YWI2MjMwNjgyMzZlNWM1MDgwYzTs/+b+: 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:33.488 11:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: 2s 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:35.394 11:01:02 
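nvme_set_keys re-keys the kernel controller in place: it echoes the new DHHC-1 strings into the controller's sysfs node and then sleeps to let re-authentication settle. The redirect targets are invisible in xtrace; a plausible reconstruction, assuming the kernel's writable dhchap_secret and dhchap_ctrl_secret attributes are what the echoes land in:

  dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  [[ -n $key  ]] && echo "$key"  > "$dev/dhchap_secret"        # host key
  [[ -n $ckey ]] && echo "$ckey" > "$dev/dhchap_ctrl_secret"   # controller key
  sleep "$timeout"   # 2s in this run, per the traced arguments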
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: ]] 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NzEwOWUwYjJiMzI4NzQ5MTc1ZDliNjM1NjQ4YzQ3NmMzZTJlMDc3MTE2NmRjMjU0w3XUSg==: 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:35.394 11:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.936 11:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
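waitforblk then gates on the namespace actually surfacing as a block device. The trace only shows the success path (two lsblk probes, return 0); the helper is a bounded poll, sketched here with an assumed retry limit that is not visible in this trace:

  waitforblk() {
    local i=0
    until lsblk -l -o NAME | grep -q -w "$1"; do
      (( ++i > 20 )) && return 1   # limit assumed
      sleep 0.1
    done
  }
  waitforblk nvme0n1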
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:38.502 nvme0n1 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:38.502 11:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.092 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:39.092 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:39.092 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:39.352 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:39.612 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:39.612 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.612 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:39.872 11:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:39.872 11:01:07 
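This block exercises bdev_nvme_set_keys, the in-place alternative to the detach/re-attach rotation used earlier: rotate on the target first, then hand the live host controller the matching keys, with no reconnect needed. Passing no keys at all clears them on both ends, which the trace also covers. Condensed, with the same placeholder NQNs:

  "$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
  "$rpc" nvmf_subsystem_set_keys "$subnqn" "$hostnqn"   # back to no auth
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0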
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:39.872 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:40.441 request: 00:17:40.441 { 00:17:40.441 "name": "nvme0", 00:17:40.441 "dhchap_key": "key1", 00:17:40.441 "dhchap_ctrlr_key": "key3", 00:17:40.441 "method": "bdev_nvme_set_keys", 00:17:40.441 "req_id": 1 00:17:40.441 } 00:17:40.441 Got JSON-RPC error response 00:17:40.441 response: 00:17:40.441 { 00:17:40.441 "code": -13, 00:17:40.441 "message": "Permission denied" 00:17:40.441 } 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:40.700 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.959 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:40.959 11:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:41.896 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:41.896 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:41.896 11:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
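The -13 (Permission denied) response is the designed failure: the host asked to re-key with key1/key3 while the target holds key2/key3, so re-authentication is refused. Because the controller was attached with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1, a controller that can no longer authenticate appears to be retired within about a second, so the harness just polls until the controller list is empty, as in this sketch of the traced loop:

  while (( $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
    sleep 1
  done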
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:42.156 11:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:43.093 nvme0n1 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.093 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.094 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:43.662 request: 00:17:43.662 { 00:17:43.662 "name": "nvme0", 00:17:43.662 "dhchap_key": "key2", 00:17:43.662 "dhchap_ctrlr_key": "key0", 00:17:43.662 "method": "bdev_nvme_set_keys", 00:17:43.662 "req_id": 1 00:17:43.662 } 00:17:43.662 Got JSON-RPC error response 00:17:43.662 response: 00:17:43.662 { 00:17:43.662 "code": -13, 00:17:43.662 "message": "Permission denied" 00:17:43.662 } 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.662 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:43.921 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:43.921 11:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:44.883 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:44.883 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.883 11:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:45.171 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:45.171 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67282 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67282 ']' 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67282 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67282 00:17:45.172 killing process with pid 67282 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:45.172 11:01:12 
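killprocess, traced here for the host app (pid 67282), is the stock teardown helper: confirm the pid is alive, resolve its command name for the log line, then signal and reap. A minimal sketch of that shape; the exact signal is not visible in this trace, so a plain kill is assumed:

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap if it was our child
  }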
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67282' 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67282 00:17:45.172 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67282 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:45.431 rmmod nvme_tcp 00:17:45.431 rmmod nvme_fabrics 00:17:45.431 rmmod nvme_keyring 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 70161 ']' 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 70161 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70161 ']' 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70161 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.431 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70161 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70161' 00:17:45.691 killing process with pid 70161 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70161 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70161 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
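nvmftestfini then unpicks the initiator side before the target: sync, unload the nvme transport modules (the rmmod lines above are modprobe's verbose output), and kill the nvmf target app (pid 70161). A sketch of the unload; the trace shows it wrapped in set +e with a {1..20} retry loop, the break condition being an assumption:

  sync
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # pulls nvme_fabrics/nvme_keyring out as dependents
  done
  modprobe -v -r nvme-fabrics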
_remove_target_ns 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:45.691 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e 
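The network teardown is asymmetric by design: only the host-side veth ends and the bridge need explicit deletion, because the target-side peers (target0/target1) lived inside nvmf_ns_spdk and vanished when _remove_target_ns dropped the namespace, hence the bare continue for them above. The visible commands reduce to this, with the namespace deletion being an assumed body for _remove_target_ns, which runs with xtrace muted:

  ip netns delete nvmf_ns_spdk   # assumed; hidden behind xtrace_disable_per_cmd
  ip link delete nvmf_br
  ip link delete initiator0
  ip link delete initiator1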
/sys/class/net/target1/address ]] 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:17:45.950 11:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.vOg /tmp/spdk.key-sha256.kOS /tmp/spdk.key-sha384.mLM /tmp/spdk.key-sha512.tME /tmp/spdk.key-sha512.VSF /tmp/spdk.key-sha384.iZO /tmp/spdk.key-sha256.rKO '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:17:45.950 00:17:45.950 real 2m54.254s 00:17:45.950 user 6m43.187s 00:17:45.950 sys 0m37.251s 00:17:45.950 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.950 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.950 ************************************ 00:17:45.950 END TEST nvmf_auth_target 00:17:45.951 ************************************ 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.951 ************************************ 00:17:45.951 START TEST nvmf_bdevio_no_huge 00:17:45.951 ************************************ 00:17:45.951 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:46.212 * Looking for test storage... 
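iptr closes the cleanup by round-tripping the firewall config, dropping every rule tagged with the SPDK_NVMF comment in one pass, exactly as traced:

  iptables-save | grep -v SPDK_NVMF | iptables-restore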
00:17:46.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.212 --rc genhtml_branch_coverage=1 00:17:46.212 --rc genhtml_function_coverage=1 00:17:46.212 --rc genhtml_legend=1 00:17:46.212 --rc geninfo_all_blocks=1 00:17:46.212 --rc geninfo_unexecuted_blocks=1 00:17:46.212 00:17:46.212 ' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.212 --rc genhtml_branch_coverage=1 00:17:46.212 --rc genhtml_function_coverage=1 00:17:46.212 --rc genhtml_legend=1 00:17:46.212 --rc geninfo_all_blocks=1 00:17:46.212 --rc geninfo_unexecuted_blocks=1 00:17:46.212 00:17:46.212 ' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.212 --rc genhtml_branch_coverage=1 00:17:46.212 --rc genhtml_function_coverage=1 00:17:46.212 --rc genhtml_legend=1 00:17:46.212 --rc geninfo_all_blocks=1 00:17:46.212 --rc geninfo_unexecuted_blocks=1 00:17:46.212 00:17:46.212 ' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:46.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:46.212 --rc genhtml_branch_coverage=1 00:17:46.212 --rc genhtml_function_coverage=1 00:17:46.212 --rc genhtml_legend=1 00:17:46.212 --rc geninfo_all_blocks=1 00:17:46.212 --rc geninfo_unexecuted_blocks=1 00:17:46.212 00:17:46.212 ' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:46.212 
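The lcov probe above runs scripts/common.sh's version comparison: both strings are split on '.', '-' and ':' and compared component-wise as integers, so 1.15 < 2 holds. A condensed sketch of the same logic, assuming purely numeric components (the real cmp_versions handles more cases):

  lt() {   # true when $1 sorts before $2, e.g. lt 1.15 2
    local IFS=.-: i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }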
11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:46.212 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:46.212 11:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@223 -- # create_target_ns 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:46.212 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.472 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:46.473 11:01:13 
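The virt-network bring-up traced through here is the inverse of the teardown seen earlier: one namespace, one bridge, and a veth pair per role, with the target end pushed into the namespace. Stripped of the helper plumbing, the commands are:

  ip netns add nvmf_ns_spdk
  ip netns exec nvmf_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set initiator0 up && ip link set initiator0_br up
  ip link set target0 up && ip link set target0_br up
  ip link set target0 netns nvmf_ns_spdk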
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:46.473 11:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:46.473 10.0.0.1 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:46.473 10.0.0.2 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:46.473 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:46.474 11:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target1 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:46.474 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772163 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:46.733 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:46.734 10.0.0.3 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772164 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:46.734 10.0.0.4 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.734 11:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:46.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:17:46.734 00:17:46.734 --- 10.0.0.1 ping statistics --- 00:17:46.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.734 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:17:46.734 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:46.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:46.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:46.735 00:17:46.735 --- 10.0.0.2 ping statistics --- 00:17:46.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.735 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:46.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:46.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:17:46.735 00:17:46.735 --- 10.0.0.3 ping statistics --- 00:17:46.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.735 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:46.735 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:46.995 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:46.995 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:46.995 00:17:46.995 --- 10.0.0.4 ping statistics --- 00:17:46.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.995 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # return 0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:46.995 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
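The ifalias readback traced above is the second half of a fixed pattern in nvmf/setup.sh: an address is drawn from the 0x0a000001 integer pool, converted to a dotted quad, assigned with `ip addr add`, and mirrored into /sys/class/net/<dev>/ifalias so that later get_ip_address calls can simply cat the alias instead of parsing `ip addr` output. A minimal sketch of that pattern, assuming a hypothetical device demo0 and root privileges (an illustration of the traced steps, not the literal setup.sh source):

    # val_to_ip: split a 32-bit pool value into four octets
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' $((val >> 24)) $(((val >> 16) & 0xff)) \
            $(((val >> 8) & 0xff)) $((val & 0xff))
    }

    addr=$(val_to_ip 167772161)                      # 0x0a000001 -> 10.0.0.1
    ip addr add "$addr/24" dev demo0                 # set_ip step
    echo "$addr" | tee /sys/class/net/demo0/ifalias  # cache for later lookups
    cat /sys/class/net/demo0/ifalias                 # get_ip_address readback

For target-side devices the same commands simply run under `ip netns exec nvmf_ns_spdk`, which is why the 10.0.0.2 and 10.0.0.4 readbacks in this trace carry the namespace wrapper.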
00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:17:46.996 11:01:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=70791 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 70791 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70791 ']' 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.996 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:46.996 [2024-12-05 11:01:14.059079] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:17:46.996 [2024-12-05 11:01:14.059175] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:47.254 [2024-12-05 11:01:14.236345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.254 [2024-12-05 11:01:14.303873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.254 [2024-12-05 11:01:14.303945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.254 [2024-12-05 11:01:14.303956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.254 [2024-12-05 11:01:14.303964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.254 [2024-12-05 11:01:14.303972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:47.254 [2024-12-05 11:01:14.304596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:47.254 [2024-12-05 11:01:14.305201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:47.254 [2024-12-05 11:01:14.305365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:47.254 [2024-12-05 11:01:14.305712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:47.254 [2024-12-05 11:01:14.309794] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.821 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:47.821 [2024-12-05 11:01:14.980314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.087 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.087 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.087 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.087 11:01:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.087 Malloc0 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.087 11:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.087 [2024-12-05 11:01:15.028426] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:17:48.087 { 00:17:48.087 "params": { 00:17:48.087 "name": "Nvme$subsystem", 00:17:48.087 "trtype": "$TEST_TRANSPORT", 00:17:48.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:48.087 "adrfam": "ipv4", 00:17:48.087 "trsvcid": "$NVMF_PORT", 00:17:48.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:48.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:48.087 "hdgst": ${hdgst:-false}, 00:17:48.087 "ddgst": ${ddgst:-false} 00:17:48.087 }, 00:17:48.087 "method": "bdev_nvme_attach_controller" 00:17:48.087 } 00:17:48.087 EOF 00:17:48.087 )") 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:17:48.087 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:17:48.087 "params": { 00:17:48.087 "name": "Nvme1", 00:17:48.087 "trtype": "tcp", 00:17:48.087 "traddr": "10.0.0.2", 00:17:48.087 "adrfam": "ipv4", 00:17:48.087 "trsvcid": "4420", 00:17:48.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.087 "hdgst": false, 00:17:48.087 "ddgst": false 00:17:48.087 }, 00:17:48.087 "method": "bdev_nvme_attach_controller" 00:17:48.087 }' 00:17:48.087 [2024-12-05 11:01:15.086306] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:17:48.087 [2024-12-05 11:01:15.086375] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70827 ] 00:17:48.343 [2024-12-05 11:01:15.245448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:48.343 [2024-12-05 11:01:15.316703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.343 [2024-12-05 11:01:15.316806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.343 [2024-12-05 11:01:15.316805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.343 [2024-12-05 11:01:15.329456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:48.615 I/O targets: 00:17:48.615 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:48.615 00:17:48.615 00:17:48.615 CUnit - A unit testing framework for C - Version 2.1-3 00:17:48.615 http://cunit.sourceforge.net/ 00:17:48.615 00:17:48.615 00:17:48.615 Suite: bdevio tests on: Nvme1n1 00:17:48.615 Test: blockdev write read block ...passed 00:17:48.615 Test: blockdev write zeroes read block ...passed 00:17:48.615 Test: blockdev write zeroes read no split ...passed 00:17:48.615 Test: blockdev write zeroes read split ...passed 00:17:48.615 Test: blockdev write zeroes read split partial ...passed 00:17:48.615 Test: blockdev reset ...[2024-12-05 11:01:15.565218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:48.615 [2024-12-05 11:01:15.565428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf72320 (9): Bad file descriptor 00:17:48.615 [2024-12-05 11:01:15.583348] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:48.615 passed 00:17:48.615 Test: blockdev write read 8 blocks ...passed 00:17:48.615 Test: blockdev write read size > 128k ...passed 00:17:48.615 Test: blockdev write read invalid size ...passed 00:17:48.615 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:48.615 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:48.615 Test: blockdev write read max offset ...passed 00:17:48.615 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:48.615 Test: blockdev writev readv 8 blocks ...passed 00:17:48.615 Test: blockdev writev readv 30 x 1block ...passed 00:17:48.615 Test: blockdev writev readv block ...passed 00:17:48.615 Test: blockdev writev readv size > 128k ...passed 00:17:48.615 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:48.615 Test: blockdev comparev and writev ...[2024-12-05 11:01:15.590329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.590368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.590386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.590397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.590700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.590716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.590731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.590740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.591025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.591040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.591055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.591065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.591477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.591498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.591513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:48.615 [2024-12-05 11:01:15.591523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:48.615 passed 00:17:48.615 Test: blockdev nvme passthru rw ...passed 00:17:48.615 Test: blockdev nvme passthru vendor specific ...[2024-12-05 11:01:15.592309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.615 [2024-12-05 11:01:15.592334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.592412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.615 [2024-12-05 11:01:15.592424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.592510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.615 [2024-12-05 11:01:15.592525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:48.615 [2024-12-05 11:01:15.592608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:48.615 [2024-12-05 11:01:15.592623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:48.615 passed 00:17:48.615 Test: blockdev nvme admin passthru ...passed 00:17:48.615 Test: blockdev copy ...passed 00:17:48.615 00:17:48.615 Run Summary: Type Total Ran Passed Failed Inactive 00:17:48.615 suites 1 1 n/a 0 0 00:17:48.615 tests 23 23 23 0 0 00:17:48.615 asserts 152 152 152 0 n/a 00:17:48.615 00:17:48.615 Elapsed time = 0.165 seconds 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:48.879 11:01:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:17:48.879 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:48.879 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:17:48.879 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:48.879 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:48.879 rmmod nvme_tcp 00:17:49.152 rmmod nvme_fabrics 00:17:49.152 rmmod nvme_keyring 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@106 -- # set -e 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 70791 ']' 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 70791 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70791 ']' 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70791 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70791 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:49.152 killing process with pid 70791 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70791' 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70791 00:17:49.152 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70791 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:49.411 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:49.670 00:17:49.670 real 0m3.636s 00:17:49.670 user 0m10.192s 00:17:49.670 sys 0m1.719s 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 
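[Editor's note] The teardown traced above finishes with iptr, which strips every firewall rule the harness installed by filtering its own comment tag out of iptables-save output and re-loading the remainder. A standalone sketch of that add/remove idiom, run as root; the SPDK_NVMF tag and both helper names are the ones visible in the trace, though the bodies here are simplified:

# Install a rule tagged with a recognizable comment so it can be
# found (and removed) again later.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Drop every tagged rule in one shot: re-load the saved ruleset
# minus the lines that carry the tag.
iptr() {
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # setup
iptr                                                          # cleanup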
00:17:49.670 ************************************ 00:17:49.670 END TEST nvmf_bdevio_no_huge 00:17:49.670 ************************************ 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:49.670 ************************************ 00:17:49.670 START TEST nvmf_tls 00:17:49.670 ************************************ 00:17:49.670 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:49.930 * Looking for test storage... 00:17:49.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.930 --rc genhtml_branch_coverage=1 00:17:49.930 --rc genhtml_function_coverage=1 00:17:49.930 --rc genhtml_legend=1 00:17:49.930 --rc geninfo_all_blocks=1 00:17:49.930 --rc geninfo_unexecuted_blocks=1 00:17:49.930 00:17:49.930 ' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.930 --rc genhtml_branch_coverage=1 00:17:49.930 --rc genhtml_function_coverage=1 00:17:49.930 --rc genhtml_legend=1 00:17:49.930 --rc geninfo_all_blocks=1 00:17:49.930 --rc geninfo_unexecuted_blocks=1 00:17:49.930 00:17:49.930 ' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.930 --rc genhtml_branch_coverage=1 00:17:49.930 --rc genhtml_function_coverage=1 00:17:49.930 --rc genhtml_legend=1 00:17:49.930 --rc geninfo_all_blocks=1 00:17:49.930 --rc geninfo_unexecuted_blocks=1 00:17:49.930 00:17:49.930 ' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.930 --rc genhtml_branch_coverage=1 00:17:49.930 --rc genhtml_function_coverage=1 00:17:49.930 --rc genhtml_legend=1 00:17:49.930 --rc geninfo_all_blocks=1 00:17:49.930 --rc geninfo_unexecuted_blocks=1 00:17:49.930 00:17:49.930 ' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.930 11:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:49.930 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:49.931 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 
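[Editor's note] The nvmftestinit call above leads into the virtual topology that the rest of this trace builds device by device: a network namespace for the target side, a bridge, and bridged veth pairs whose target ends live inside the namespace. A condensed replay of one initiator/target pair, using the same names and addresses the trace uses below (run as root; an illustrative sketch, not the setup.sh helpers themselves):

ip netns add nvmf_ns_spdk                        # target side gets its own namespace
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk           # move the target end into the namespace
ip addr add 10.0.0.1/24 dev initiator0
ip link set initiator0 up
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip netns exec nvmf_ns_spdk ip link set target0 up
ip link set initiator0_br master nvmf_br         # bridge the two _br peers together
ip link set target0_br master nvmf_br
ip link set initiator0_br up
ip link set target0_br up
ping -c 1 10.0.0.2                               # initiator -> target across the bridge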
00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@223 -- # create_target_ns 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:49.931 11:01:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # 
eval ' ip link set nvmf_br up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:49.931 11:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target0 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:49.931 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 
10.0.0.1 00:17:50.192 10.0.0.1 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:50.192 10.0.0.2 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:50.192 11:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:50.192 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local 
dev=initiator1 peer=initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 
-- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772163 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:50.193 10.0.0.3 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772164 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:50.193 10.0.0.4 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:50.193 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:50.454 11:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:50.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:50.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:17:50.454 00:17:50.454 --- 10.0.0.1 ping statistics --- 00:17:50.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.454 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:50.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:50.454 00:17:50.454 --- 10.0.0.2 ping statistics --- 00:17:50.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.454 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:50.454 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:50.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:50.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.161 ms 00:17:50.454 00:17:50.454 --- 10.0.0.3 ping statistics --- 00:17:50.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.454 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:50.455 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:50.455 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:17:50.455 00:17:50.455 --- 10.0.0.4 ping statistics --- 00:17:50.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.455 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # return 0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
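The address discovery traced here never consults `ip addr`: setup.sh records each interface's IP in /sys/class/net/<dev>/ifalias at creation time and reads it back later, prefixing the read with `ip netns exec` when the device lives inside the target namespace, then confirms reachability with a single ping. A minimal sketch of that pattern, assuming the helper names and the hard-coded nvmf_ns_spdk namespace seen in the trace (not the script's exact internals):

# Sketch: read the IP stored in an interface's ifalias, optionally inside a
# network namespace, then verify reachability with one ping.
get_ifalias_ip() {
    local dev=$1 in_ns=${2:-}
    local -a prefix=()
    [[ -n $in_ns ]] && prefix=(ip netns exec "$in_ns")
    "${prefix[@]}" cat "/sys/class/net/$dev/ifalias"
}

ping_once() {
    local ip=$1 in_ns=${2:-}
    local -a prefix=()
    [[ -n $in_ns ]] && prefix=(ip netns exec "$in_ns")
    "${prefix[@]}" ping -c 1 "$ip"
}

# Mirroring the trace: initiator addresses are read on the host and pinged
# from inside the namespace; target addresses the other way around.
ip=$(get_ifalias_ip initiator0) && ping_once "$ip" nvmf_ns_spdk
ip=$(get_ifalias_ip target0 nvmf_ns_spdk) && ping_once "$ip"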
00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:50.455 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=71067 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 71067 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71067 ']' 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.714 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.714 11:01:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.714 [2024-12-05 11:01:17.682684] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:17:50.714 [2024-12-05 11:01:17.682753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.714 [2024-12-05 11:01:17.821048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.714 [2024-12-05 11:01:17.873675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.714 [2024-12-05 11:01:17.873729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.714 [2024-12-05 11:01:17.873739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.715 [2024-12-05 11:01:17.873747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.715 [2024-12-05 11:01:17.873754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.715 [2024-12-05 11:01:17.874077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:51.651 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:51.910 true 00:17:51.910 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.910 11:01:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:52.170 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:52.170 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:52.170 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:52.429 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.429 11:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:52.689 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:52.689 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:52.689 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:52.689 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.689 11:01:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:52.949 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:52.949 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:52.949 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:52.949 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:53.209 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:53.209 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:53.209 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:53.468 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.468 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:53.726 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:53.726 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:53.726 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:53.985 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:53.985 11:01:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.X4pbPUgBtZ 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.YBDFgniYQc 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.X4pbPUgBtZ 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.YBDFgniYQc 00:17:54.244 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:54.503 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:54.762 [2024-12-05 11:01:21.766241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:54.762 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.X4pbPUgBtZ 00:17:54.762 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.X4pbPUgBtZ 00:17:54.762 11:01:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:55.022 [2024-12-05 11:01:22.026817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.022 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.282 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:55.541 [2024-12-05 11:01:22.462411] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.541 [2024-12-05 11:01:22.462741] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.541 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:55.541 malloc0 00:17:55.541 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.800 11:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.X4pbPUgBtZ 00:17:56.060 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.320 11:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.X4pbPUgBtZ 00:18:06.352 Initializing NVMe Controllers 00:18:06.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.352 Initialization complete. Launching workers. 00:18:06.352 ======================================================== 00:18:06.352 Latency(us) 00:18:06.352 Device Information : IOPS MiB/s Average min max 00:18:06.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11793.20 46.07 5427.90 1005.00 6492.29 00:18:06.352 ======================================================== 00:18:06.352 Total : 11793.20 46.07 5427.90 1005.00 6492.29 00:18:06.352 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4pbPUgBtZ 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X4pbPUgBtZ 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71294 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71294 /var/tmp/bdevperf.sock 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71294 ']' 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.610 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.610 11:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.610 [2024-12-05 11:01:33.571313] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:06.610 [2024-12-05 11:01:33.571552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71294 ] 00:18:06.610 [2024-12-05 11:01:33.724932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.869 [2024-12-05 11:01:33.779224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.869 [2024-12-05 11:01:33.820978] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:07.440 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.440 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:07.440 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X4pbPUgBtZ 00:18:07.698 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.698 [2024-12-05 11:01:34.842482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.956 TLSTESTn1 00:18:07.956 11:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.956 Running I/O for 10 seconds... 
00:18:10.263 4827.00 IOPS, 18.86 MiB/s [2024-12-05T11:01:38.360Z] 4907.50 IOPS, 19.17 MiB/s [2024-12-05T11:01:39.302Z] 4920.67 IOPS, 19.22 MiB/s [2024-12-05T11:01:40.239Z] 4869.75 IOPS, 19.02 MiB/s [2024-12-05T11:01:41.177Z] 4945.20 IOPS, 19.32 MiB/s [2024-12-05T11:01:42.115Z] 5014.17 IOPS, 19.59 MiB/s [2024-12-05T11:01:43.053Z] 5057.86 IOPS, 19.76 MiB/s [2024-12-05T11:01:44.431Z] 5081.38 IOPS, 19.85 MiB/s [2024-12-05T11:01:45.411Z] 5041.44 IOPS, 19.69 MiB/s [2024-12-05T11:01:45.411Z] 5027.50 IOPS, 19.64 MiB/s 00:18:18.252 Latency(us) 00:18:18.252 [2024-12-05T11:01:45.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.252 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:18.252 Verification LBA range: start 0x0 length 0x2000 00:18:18.252 TLSTESTn1 : 10.02 5032.07 19.66 0.00 0.00 25395.71 5421.85 35584.21 00:18:18.252 [2024-12-05T11:01:45.411Z] =================================================================================================================== 00:18:18.252 [2024-12-05T11:01:45.411Z] Total : 5032.07 19.66 0.00 0.00 25395.71 5421.85 35584.21 00:18:18.252 { 00:18:18.252 "results": [ 00:18:18.252 { 00:18:18.252 "job": "TLSTESTn1", 00:18:18.252 "core_mask": "0x4", 00:18:18.252 "workload": "verify", 00:18:18.252 "status": "finished", 00:18:18.252 "verify_range": { 00:18:18.252 "start": 0, 00:18:18.252 "length": 8192 00:18:18.252 }, 00:18:18.252 "queue_depth": 128, 00:18:18.252 "io_size": 4096, 00:18:18.252 "runtime": 10.015761, 00:18:18.252 "iops": 5032.068956118262, 00:18:18.252 "mibps": 19.65651935983696, 00:18:18.252 "io_failed": 0, 00:18:18.252 "io_timeout": 0, 00:18:18.252 "avg_latency_us": 25395.70785491171, 00:18:18.252 "min_latency_us": 5421.8538152610445, 00:18:18.252 "max_latency_us": 35584.20562248996 00:18:18.252 } 00:18:18.252 ], 00:18:18.252 "core_count": 1 00:18:18.252 } 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71294 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71294 ']' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71294 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71294 00:18:18.252 killing process with pid 71294 00:18:18.252 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.252 00:18:18.252 Latency(us) 00:18:18.252 [2024-12-05T11:01:45.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.252 [2024-12-05T11:01:45.411Z] =================================================================================================================== 00:18:18.252 [2024-12-05T11:01:45.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71294' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71294 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71294 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YBDFgniYQc 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YBDFgniYQc 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YBDFgniYQc 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.YBDFgniYQc 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71428 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71428 /var/tmp/bdevperf.sock 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71428 ']' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.252 11:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.252 [2024-12-05 11:01:45.337420] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
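The two keys exercised by this test were generated above (target/tls.sh@119-120) in the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier, and a base64 payload terminated by a colon. Judging from the traced output, the payload is the configured key string plus four trailing checksum bytes. A hedged reconstruction of that helper, shelling out to python the way the traced code does; the CRC32 checksum and its little-endian byte order are assumptions inferred from the payload length, not verified against nvmf/common.sh:

# Sketch of the format_interchange_psk pattern: emits
# NVMeTLSkey-1:<digest>:<base64(key + 4-byte checksum)>:
format_interchange_psk_sketch() {
    local key=$1 digest=${2:-1}
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
payload = key + zlib.crc32(key).to_bytes(4, "little")  # checksum is an assumption
print(f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(payload).decode()}:")
PY
}

# e.g. format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1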
00:18:18.252 [2024-12-05 11:01:45.337676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71428 ] 00:18:18.511 [2024-12-05 11:01:45.481735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.511 [2024-12-05 11:01:45.540336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.511 [2024-12-05 11:01:45.584733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.448 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.448 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.448 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YBDFgniYQc 00:18:19.448 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:19.708 [2024-12-05 11:01:46.803962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.708 [2024-12-05 11:01:46.810634] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:19.708 [2024-12-05 11:01:46.811495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fcff0 (107): Transport endpoint is not connected 00:18:19.708 [2024-12-05 11:01:46.812482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fcff0 (9): Bad file descriptor 00:18:19.708 [2024-12-05 11:01:46.813479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:19.708 [2024-12-05 11:01:46.813499] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:19.708 [2024-12-05 11:01:46.813509] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:19.708 [2024-12-05 11:01:46.813523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:19.708 request: 00:18:19.708 { 00:18:19.708 "name": "TLSTEST", 00:18:19.708 "trtype": "tcp", 00:18:19.708 "traddr": "10.0.0.2", 00:18:19.708 "adrfam": "ipv4", 00:18:19.708 "trsvcid": "4420", 00:18:19.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.708 "prchk_reftag": false, 00:18:19.708 "prchk_guard": false, 00:18:19.708 "hdgst": false, 00:18:19.708 "ddgst": false, 00:18:19.708 "psk": "key0", 00:18:19.708 "allow_unrecognized_csi": false, 00:18:19.708 "method": "bdev_nvme_attach_controller", 00:18:19.708 "req_id": 1 00:18:19.708 } 00:18:19.708 Got JSON-RPC error response 00:18:19.708 response: 00:18:19.708 { 00:18:19.708 "code": -5, 00:18:19.708 "message": "Input/output error" 00:18:19.708 } 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71428 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71428 ']' 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71428 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.708 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71428 00:18:19.967 killing process with pid 71428 00:18:19.967 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.967 00:18:19.967 Latency(us) 00:18:19.967 [2024-12-05T11:01:47.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.967 [2024-12-05T11:01:47.126Z] =================================================================================================================== 00:18:19.967 [2024-12-05T11:01:47.126Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.967 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:19.967 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:19.967 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71428' 00:18:19.967 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71428 00:18:19.967 11:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71428 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X4pbPUgBtZ 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X4pbPUgBtZ 
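The NOT wrapper whose xtrace brackets this failure (common/autotest_common.sh@652-679, visible above and below) runs a command and inverts its exit status, so an attach that is supposed to fail counts as a pass. A simplified sketch reconstructed from the traced lines; the real helper also validates its argument with `type -t` and carries extra bookkeeping:

# Run the command, keep its status, and succeed only when it failed on its
# own; statuses above 128 mean death by signal and stay failures.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"
    (( es != 0 ))
}

# Usage: NOT false && echo 'expected failure observed'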
00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.X4pbPUgBtZ 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X4pbPUgBtZ 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71457 00:18:19.967 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71457 /var/tmp/bdevperf.sock 00:18:19.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71457 ']' 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.968 11:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.968 [2024-12-05 11:01:47.122096] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:18:19.968 [2024-12-05 11:01:47.122629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71457 ] 00:18:20.226 [2024-12-05 11:01:47.277033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.226 [2024-12-05 11:01:47.334703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.226 [2024-12-05 11:01:47.378922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:21.161 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:21.161 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:21.161 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X4pbPUgBtZ 00:18:21.161 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:21.420 [2024-12-05 11:01:48.501402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.420 [2024-12-05 11:01:48.507495] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:21.420 [2024-12-05 11:01:48.507654] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:21.420 [2024-12-05 11:01:48.507709] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:21.420 [2024-12-05 11:01:48.507762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7ff0 (107): Transport endpoint is not connected 00:18:21.420 [2024-12-05 11:01:48.508750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7ff0 (9): Bad file descriptor 00:18:21.420 [2024-12-05 11:01:48.509747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:21.420 [2024-12-05 11:01:48.509898] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:21.420 [2024-12-05 11:01:48.509912] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:21.420 [2024-12-05 11:01:48.509929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
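This second expected failure isolates the host NQN: the key file is the one the target knows, but it was registered for host1, and the server-side errors above show why that matters. The target looks TLS keys up by a PSK identity string it prints verbatim: the fixed NVMe0R01 tag followed by the host NQN and the subsystem NQN, so the same key offered as host2 finds no match. Reproducing the identity from the trace:

# The lookup identity printed by tcp_sock_get_key in the errors above:
#   NVMe0R01 <hostnqn> <subnqn>
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"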
00:18:21.420 request: 00:18:21.420 { 00:18:21.420 "name": "TLSTEST", 00:18:21.420 "trtype": "tcp", 00:18:21.420 "traddr": "10.0.0.2", 00:18:21.420 "adrfam": "ipv4", 00:18:21.420 "trsvcid": "4420", 00:18:21.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.420 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:21.420 "prchk_reftag": false, 00:18:21.420 "prchk_guard": false, 00:18:21.420 "hdgst": false, 00:18:21.420 "ddgst": false, 00:18:21.420 "psk": "key0", 00:18:21.420 "allow_unrecognized_csi": false, 00:18:21.420 "method": "bdev_nvme_attach_controller", 00:18:21.420 "req_id": 1 00:18:21.420 } 00:18:21.420 Got JSON-RPC error response 00:18:21.420 response: 00:18:21.420 { 00:18:21.420 "code": -5, 00:18:21.420 "message": "Input/output error" 00:18:21.420 } 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71457 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71457 ']' 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71457 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.420 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71457 00:18:21.679 killing process with pid 71457 00:18:21.679 Received shutdown signal, test time was about 10.000000 seconds 00:18:21.679 00:18:21.679 Latency(us) 00:18:21.679 [2024-12-05T11:01:48.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.679 [2024-12-05T11:01:48.838Z] =================================================================================================================== 00:18:21.679 [2024-12-05T11:01:48.838Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71457' 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71457 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71457 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4pbPUgBtZ 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4pbPUgBtZ 
00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.X4pbPUgBtZ 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.X4pbPUgBtZ 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71491 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71491 /var/tmp/bdevperf.sock 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71491 ']' 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.679 11:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:21.679 [2024-12-05 11:01:48.812616] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
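Every case in this block starts a fresh bdevperf with the same flags, visible in the xtrace above: -m 0x4 pins the reactor to core 2, -z holds the app idle until RPCs arrive (which is why waitforlisten polls the socket first), -r names the RPC socket, and -q 128 -o 4096 -w verify -t 10 set the queue depth, I/O size, workload, and duration of the test it would run after a successful attach:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10

The DPDK EAL parameters line that follows is the same start-up as seen from the environment-abstraction layer; note the --file-prefix keyed to the new pid, which keeps each instance's hugepage files separate.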
00:18:21.679 [2024-12-05 11:01:48.812689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ] 00:18:21.938 [2024-12-05 11:01:48.968038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.938 [2024-12-05 11:01:49.024117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.938 [2024-12-05 11:01:49.067527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.875 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.875 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:22.875 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.X4pbPUgBtZ 00:18:22.875 11:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.135 [2024-12-05 11:01:50.194178] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.135 [2024-12-05 11:01:50.204889] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:23.135 [2024-12-05 11:01:50.205423] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:23.135 [2024-12-05 11:01:50.205723] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:23.135 [2024-12-05 11:01:50.206082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b39ff0 (107): Transport endpoint is not connected 00:18:23.135 [2024-12-05 11:01:50.207073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b39ff0 (9): Bad file descriptor 00:18:23.135 [2024-12-05 11:01:50.208068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:23.135 [2024-12-05 11:01:50.208286] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:23.135 [2024-12-05 11:01:50.208537] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:23.135 [2024-12-05 11:01:50.208653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
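Identical failure, mirrored identities (host1 -> cnode2 this time). The string the target could not find a PSK for is worth reading closely: NVMe0R01, then the host NQN, then the subsystem NQN. That is the TLS PSK identity format used for NVMe/TCP, so each PSK is bound to exactly one host/subsystem pair, and a key registered for one pairing can never satisfy a handshake for another. A sketch of how such an identity is assembled, inferred only from the ERROR lines above (reading the fixed NVMe0R01 prefix as encoding protocol details such as the hash in use is my assumption):

    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    identity="NVMe0R01 ${hostnqn} ${subnqn}"   # matches the lookup string in the errors above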
00:18:23.135 request: 00:18:23.135 { 00:18:23.135 "name": "TLSTEST", 00:18:23.135 "trtype": "tcp", 00:18:23.135 "traddr": "10.0.0.2", 00:18:23.135 "adrfam": "ipv4", 00:18:23.135 "trsvcid": "4420", 00:18:23.135 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:23.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.135 "prchk_reftag": false, 00:18:23.135 "prchk_guard": false, 00:18:23.135 "hdgst": false, 00:18:23.135 "ddgst": false, 00:18:23.135 "psk": "key0", 00:18:23.135 "allow_unrecognized_csi": false, 00:18:23.135 "method": "bdev_nvme_attach_controller", 00:18:23.135 "req_id": 1 00:18:23.135 } 00:18:23.135 Got JSON-RPC error response 00:18:23.135 response: 00:18:23.135 { 00:18:23.135 "code": -5, 00:18:23.135 "message": "Input/output error" 00:18:23.135 } 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71491 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71491 ']' 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71491 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71491 00:18:23.135 killing process with pid 71491 00:18:23.135 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.135 00:18:23.135 Latency(us) 00:18:23.135 [2024-12-05T11:01:50.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.135 [2024-12-05T11:01:50.294Z] =================================================================================================================== 00:18:23.135 [2024-12-05T11:01:50.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71491' 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71491 00:18:23.135 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71491 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:23.393 11:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.393 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71514 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71514 /var/tmp/bdevperf.sock 00:18:23.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71514 ']' 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.394 11:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.394 [2024-12-05 11:01:50.510494] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
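The valid_exec_arg/type -t scaffolding traced above belongs to the suite's NOT wrapper, the expected-failure assertion around each of these cases: the wrapped command is executed, its exit status is inverted, and the es=1 bookkeeping afterwards records that the failure was the desired outcome. A minimal sketch of the idea, assuming nothing about the real helper beyond what the xtrace shows:

    NOT() {
        # Assertion helper: succeed only when the wrapped command fails.
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0
    }

    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''

Note the empty third argument: this case deliberately passes no PSK path at all, so the next failure happens one step earlier, at key registration rather than at the handshake.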
00:18:23.394 [2024-12-05 11:01:50.510724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71514 ] 00:18:23.652 [2024-12-05 11:01:50.672313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.652 [2024-12-05 11:01:50.730609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.652 [2024-12-05 11:01:50.774280] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:24.585 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.585 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.585 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:24.585 [2024-12-05 11:01:51.617177] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:24.585 [2024-12-05 11:01:51.617628] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:24.585 request: 00:18:24.585 { 00:18:24.585 "name": "key0", 00:18:24.585 "path": "", 00:18:24.585 "method": "keyring_file_add_key", 00:18:24.585 "req_id": 1 00:18:24.585 } 00:18:24.585 Got JSON-RPC error response 00:18:24.585 response: 00:18:24.585 { 00:18:24.585 "code": -1, 00:18:24.585 "message": "Operation not permitted" 00:18:24.585 } 00:18:24.585 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.843 [2024-12-05 11:01:51.856950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.843 [2024-12-05 11:01:51.857456] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:24.843 request: 00:18:24.843 { 00:18:24.843 "name": "TLSTEST", 00:18:24.843 "trtype": "tcp", 00:18:24.843 "traddr": "10.0.0.2", 00:18:24.843 "adrfam": "ipv4", 00:18:24.843 "trsvcid": "4420", 00:18:24.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.843 "prchk_reftag": false, 00:18:24.843 "prchk_guard": false, 00:18:24.844 "hdgst": false, 00:18:24.844 "ddgst": false, 00:18:24.844 "psk": "key0", 00:18:24.844 "allow_unrecognized_csi": false, 00:18:24.844 "method": "bdev_nvme_attach_controller", 00:18:24.844 "req_id": 1 00:18:24.844 } 00:18:24.844 Got JSON-RPC error response 00:18:24.844 response: 00:18:24.844 { 00:18:24.844 "code": -126, 00:18:24.844 "message": "Required key not available" 00:18:24.844 } 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71514 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71514 ']' 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71514 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.844 11:01:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71514 00:18:24.844 killing process with pid 71514 00:18:24.844 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.844 00:18:24.844 Latency(us) 00:18:24.844 [2024-12-05T11:01:52.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.844 [2024-12-05T11:01:52.003Z] =================================================================================================================== 00:18:24.844 [2024-12-05T11:01:52.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71514' 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71514 00:18:24.844 11:01:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71514 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71067 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71067 ']' 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71067 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71067 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.102 killing process with pid 71067 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71067' 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71067 00:18:25.102 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71067 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 
-- # prefix=NVMeTLSkey-1 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.dnyitBhKEg 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.dnyitBhKEg 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=71558 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 71558 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71558 ']' 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.360 11:01:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.360 [2024-12-05 11:01:52.431363] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:25.360 [2024-12-05 11:01:52.431583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.617 [2024-12-05 11:01:52.585057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.617 [2024-12-05 11:01:52.634109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.617 [2024-12-05 11:01:52.634343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
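The key_long assembled just above is a TLS PSK in interchange form: the NVMeTLSkey-1 prefix, a two-digit field derived from the digest argument (2 here, printed as 02), and a base64 blob with a trailing colon. Decoding the blob yields the configured key bytes followed by four extra bytes, consistent with a CRC32 checksum appended before encoding; a hedged sketch of what the inline python step computes, with the checksum byte order as my assumption:

    python - <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity check appended to the key
    print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
    EOF

The result lands in the mktemp file (/tmp/tmp.dnyitBhKEg) and is chmod 0600, which matters twice later: the keyring refuses key files readable by anyone but the owner.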
00:18:25.617 [2024-12-05 11:01:52.634361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.617 [2024-12-05 11:01:52.634369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.617 [2024-12-05 11:01:52.634378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.617 [2024-12-05 11:01:52.634663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.617 [2024-12-05 11:01:52.676030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.184 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.184 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.184 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:26.184 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.184 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.443 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.443 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:18:26.443 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dnyitBhKEg 00:18:26.443 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:26.701 [2024-12-05 11:01:53.621461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.701 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.959 11:01:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.959 [2024-12-05 11:01:54.092816] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.959 [2024-12-05 11:01:54.093047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.216 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:27.216 malloc0 00:18:27.474 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:27.732 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:27.732 11:01:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dnyitBhKEg 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
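setup_nvmf_tgt, traced above, is the complete target-side recipe for a TLS-capable subsystem. Pulled out of the xtrace into plain commands (rpc.py abbreviates the full scripts/rpc.py path; all names and addresses are the ones used in this run):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener is what turns TLS on (hence the 'TLS support is considered experimental' NOTICE), and the --psk on add_host supplies exactly the per-host key binding whose absence made the first two attach attempts fail.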
00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dnyitBhKEg 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71619 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71619 /var/tmp/bdevperf.sock 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71619 ']' 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.990 11:01:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.248 [2024-12-05 11:01:55.199492] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
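This run is the positive case: the client registers the same key file the target trusts, so the attach succeeds and the bdev surfaces as TLSTESTn1. Once bdevperf (started with -z, so idle) reports ready, the workload is kicked off over its RPC socket by the helper seen in the trace below:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
        -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the closing latency table that follow show roughly 5.8k IOPS of 4 KiB verify I/O sustained for 10 seconds across the TLS-wrapped TCP connection; the earlier tables of zeros and a UINT64_MAX sentinel came from aborted runs, while this one carries real numbers.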
00:18:28.248 [2024-12-05 11:01:55.199847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71619 ] 00:18:28.248 [2024-12-05 11:01:55.362692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.505 [2024-12-05 11:01:55.415950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.505 [2024-12-05 11:01:55.457287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.072 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.072 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.072 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:29.332 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.592 [2024-12-05 11:01:56.658191] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.592 TLSTESTn1 00:18:29.851 11:01:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:29.851 Running I/O for 10 seconds... 00:18:31.723 5816.00 IOPS, 22.72 MiB/s [2024-12-05T11:02:00.259Z] 5789.50 IOPS, 22.62 MiB/s [2024-12-05T11:02:01.196Z] 5761.00 IOPS, 22.50 MiB/s [2024-12-05T11:02:02.136Z] 5751.50 IOPS, 22.47 MiB/s [2024-12-05T11:02:03.071Z] 5734.60 IOPS, 22.40 MiB/s [2024-12-05T11:02:04.007Z] 5746.50 IOPS, 22.45 MiB/s [2024-12-05T11:02:04.945Z] 5759.29 IOPS, 22.50 MiB/s [2024-12-05T11:02:05.882Z] 5765.50 IOPS, 22.52 MiB/s [2024-12-05T11:02:07.259Z] 5768.67 IOPS, 22.53 MiB/s [2024-12-05T11:02:07.259Z] 5774.60 IOPS, 22.56 MiB/s 00:18:40.100 Latency(us) 00:18:40.100 [2024-12-05T11:02:07.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.100 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.101 Verification LBA range: start 0x0 length 0x2000 00:18:40.101 TLSTESTn1 : 10.01 5779.91 22.58 0.00 0.00 22111.75 4290.11 16739.32 00:18:40.101 [2024-12-05T11:02:07.260Z] =================================================================================================================== 00:18:40.101 [2024-12-05T11:02:07.260Z] Total : 5779.91 22.58 0.00 0.00 22111.75 4290.11 16739.32 00:18:40.101 { 00:18:40.101 "results": [ 00:18:40.101 { 00:18:40.101 "job": "TLSTESTn1", 00:18:40.101 "core_mask": "0x4", 00:18:40.101 "workload": "verify", 00:18:40.101 "status": "finished", 00:18:40.101 "verify_range": { 00:18:40.101 "start": 0, 00:18:40.101 "length": 8192 00:18:40.101 }, 00:18:40.101 "queue_depth": 128, 00:18:40.101 "io_size": 4096, 00:18:40.101 "runtime": 10.012265, 00:18:40.101 "iops": 5779.910939233031, 00:18:40.101 "mibps": 22.577777106379028, 00:18:40.101 "io_failed": 0, 00:18:40.101 "io_timeout": 0, 00:18:40.101 "avg_latency_us": 22111.754019610496, 00:18:40.101 "min_latency_us": 4290.107630522089, 00:18:40.101 
"max_latency_us": 16739.315662650602 00:18:40.101 } 00:18:40.101 ], 00:18:40.101 "core_count": 1 00:18:40.101 } 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71619 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71619 ']' 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71619 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71619 00:18:40.101 killing process with pid 71619 00:18:40.101 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.101 00:18:40.101 Latency(us) 00:18:40.101 [2024-12-05T11:02:07.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.101 [2024-12-05T11:02:07.260Z] =================================================================================================================== 00:18:40.101 [2024-12-05T11:02:07.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71619' 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71619 00:18:40.101 11:02:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71619 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.dnyitBhKEg 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dnyitBhKEg 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dnyitBhKEg 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dnyitBhKEg 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dnyitBhKEg 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71756 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71756 /var/tmp/bdevperf.sock 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71756 ']' 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.101 11:02:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.101 [2024-12-05 11:02:07.134402] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
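Next negative case: the key material is valid, but its file mode has just been loosened with chmod 0666 (target/tls.sh@171 above). The keyring checks permissions before reading any key bytes, so registration is refused outright and the subsequent attach can only report a missing key. Condensed, the failing step and its result (the 0100666 in the upcoming ERROR line is the file's full st_mode):

    chmod 0666 /tmp/tmp.dnyitBhKEg
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg
    # -> Invalid permissions for key file '/tmp/tmp.dnyitBhKEg': 0100666 (JSON-RPC code -1)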
00:18:40.101 [2024-12-05 11:02:07.134579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71756 ] 00:18:40.360 [2024-12-05 11:02:07.284124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.360 [2024-12-05 11:02:07.336494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:40.360 [2024-12-05 11:02:07.377974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.928 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.928 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:40.928 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:41.187 [2024-12-05 11:02:08.243553] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dnyitBhKEg': 0100666 00:18:41.187 [2024-12-05 11:02:08.243597] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:41.187 request: 00:18:41.187 { 00:18:41.187 "name": "key0", 00:18:41.187 "path": "/tmp/tmp.dnyitBhKEg", 00:18:41.187 "method": "keyring_file_add_key", 00:18:41.187 "req_id": 1 00:18:41.187 } 00:18:41.187 Got JSON-RPC error response 00:18:41.187 response: 00:18:41.187 { 00:18:41.187 "code": -1, 00:18:41.187 "message": "Operation not permitted" 00:18:41.187 } 00:18:41.187 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.447 [2024-12-05 11:02:08.459335] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.447 [2024-12-05 11:02:08.459393] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:41.447 request: 00:18:41.447 { 00:18:41.447 "name": "TLSTEST", 00:18:41.447 "trtype": "tcp", 00:18:41.447 "traddr": "10.0.0.2", 00:18:41.447 "adrfam": "ipv4", 00:18:41.447 "trsvcid": "4420", 00:18:41.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.447 "prchk_reftag": false, 00:18:41.447 "prchk_guard": false, 00:18:41.447 "hdgst": false, 00:18:41.447 "ddgst": false, 00:18:41.447 "psk": "key0", 00:18:41.447 "allow_unrecognized_csi": false, 00:18:41.447 "method": "bdev_nvme_attach_controller", 00:18:41.447 "req_id": 1 00:18:41.447 } 00:18:41.447 Got JSON-RPC error response 00:18:41.447 response: 00:18:41.447 { 00:18:41.447 "code": -126, 00:18:41.447 "message": "Required key not available" 00:18:41.447 } 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71756 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71756 ']' 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71756 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71756 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71756' 00:18:41.447 killing process with pid 71756 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71756 00:18:41.447 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.447 00:18:41.447 Latency(us) 00:18:41.447 [2024-12-05T11:02:08.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.447 [2024-12-05T11:02:08.606Z] =================================================================================================================== 00:18:41.447 [2024-12-05T11:02:08.606Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.447 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71756 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71558 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71558 ']' 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71558 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71558 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71558' 00:18:41.719 killing process with pid 71558 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71558 00:18:41.719 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71558 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=71789 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 71789 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71789 ']' 00:18:41.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.982 11:02:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.982 [2024-12-05 11:02:08.969318] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:41.982 [2024-12-05 11:02:08.969381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.982 [2024-12-05 11:02:09.115066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.241 [2024-12-05 11:02:09.163037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.241 [2024-12-05 11:02:09.163086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.241 [2024-12-05 11:02:09.163096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.241 [2024-12-05 11:02:09.163105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.241 [2024-12-05 11:02:09.163112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
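The permission problem is now replayed on the target side: a fresh nvmf_tgt (pid 71789) runs setup_nvmf_tgt wrapped in NOT, so the suite expects it to break; the keyring again rejects the 0666 key file, and the follow-on host registration fails because the key name it references was never created:

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # -> Key 'key0' does not exist (JSON-RPC code -32603, 'Internal error')

The two error codes below draw a useful distinction: a bad key file is rejected with -1 at registration time, while a dangling key reference surfaces later as -32603 when something tries to use it.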
00:18:42.241 [2024-12-05 11:02:09.163401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.241 [2024-12-05 11:02:09.204896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dnyitBhKEg 00:18:42.808 11:02:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.066 [2024-12-05 11:02:10.098416] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.067 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.326 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.585 [2024-12-05 11:02:10.505983] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.585 [2024-12-05 11:02:10.506210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.585 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:43.585 malloc0 00:18:43.585 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:43.844 11:02:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:44.102 
[2024-12-05 11:02:11.161780] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dnyitBhKEg': 0100666 00:18:44.102 [2024-12-05 11:02:11.161968] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:44.102 request: 00:18:44.102 { 00:18:44.102 "name": "key0", 00:18:44.102 "path": "/tmp/tmp.dnyitBhKEg", 00:18:44.102 "method": "keyring_file_add_key", 00:18:44.102 "req_id": 1 00:18:44.102 } 00:18:44.102 Got JSON-RPC error response 00:18:44.102 response: 00:18:44.102 { 00:18:44.102 "code": -1, 00:18:44.102 "message": "Operation not permitted" 00:18:44.102 } 00:18:44.102 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:44.362 [2024-12-05 11:02:11.393473] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:44.362 [2024-12-05 11:02:11.393543] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:44.362 request: 00:18:44.362 { 00:18:44.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.362 "host": "nqn.2016-06.io.spdk:host1", 00:18:44.362 "psk": "key0", 00:18:44.362 "method": "nvmf_subsystem_add_host", 00:18:44.362 "req_id": 1 00:18:44.362 } 00:18:44.362 Got JSON-RPC error response 00:18:44.362 response: 00:18:44.362 { 00:18:44.362 "code": -32603, 00:18:44.362 "message": "Internal error" 00:18:44.362 } 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71789 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71789 ']' 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71789 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71789 00:18:44.362 killing process with pid 71789 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71789' 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71789 00:18:44.362 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71789 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.dnyitBhKEg 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=71853 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 71853 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71853 ']' 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.621 11:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.621 [2024-12-05 11:02:11.713092] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:44.621 [2024-12-05 11:02:11.713167] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.880 [2024-12-05 11:02:11.850022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.880 [2024-12-05 11:02:11.902023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.880 [2024-12-05 11:02:11.902070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.880 [2024-12-05 11:02:11.902080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.880 [2024-12-05 11:02:11.902089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.880 [2024-12-05 11:02:11.902096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:44.880 [2024-12-05 11:02:11.902387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.880 [2024-12-05 11:02:11.944449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dnyitBhKEg 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:45.814 [2024-12-05 11:02:12.858604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.814 11:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:46.071 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:46.329 [2024-12-05 11:02:13.290126] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.329 [2024-12-05 11:02:13.290826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.329 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:46.586 malloc0 00:18:46.586 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.987 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:46.987 11:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71904 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71904 /var/tmp/bdevperf.sock 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71904 ']' 
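Final pass: with the file mode restored to 0600 (target/tls.sh@182 earlier), the target setup goes through cleanly, bdevperf (pid 71904) attaches over TLS and creates TLSTESTn1 again, and the test then snapshots the entire target configuration:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config

The JSON dump that begins below is that snapshot: its keyring section records key0 with the /tmp/tmp.dnyitBhKEg path, and the sock section shows the uring default that every instance in this log announced at start-up.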
00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.247 11:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.247 [2024-12-05 11:02:14.240234] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:47.247 [2024-12-05 11:02:14.240465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71904 ] 00:18:47.247 [2024-12-05 11:02:14.391834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.505 [2024-12-05 11:02:14.446840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.505 [2024-12-05 11:02:14.488321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.074 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.074 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.074 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:18:48.335 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.592 [2024-12-05 11:02:15.545584] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.592 TLSTESTn1 00:18:48.592 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:48.850 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:48.850 "subsystems": [ 00:18:48.850 { 00:18:48.850 "subsystem": "keyring", 00:18:48.850 "config": [ 00:18:48.850 { 00:18:48.850 "method": "keyring_file_add_key", 00:18:48.850 "params": { 00:18:48.850 "name": "key0", 00:18:48.850 "path": "/tmp/tmp.dnyitBhKEg" 00:18:48.850 } 00:18:48.850 } 00:18:48.850 ] 00:18:48.850 }, 00:18:48.850 { 00:18:48.850 "subsystem": "iobuf", 00:18:48.850 "config": [ 00:18:48.850 { 00:18:48.850 "method": "iobuf_set_options", 00:18:48.850 "params": { 00:18:48.850 "small_pool_count": 8192, 00:18:48.850 "large_pool_count": 1024, 00:18:48.850 "small_bufsize": 8192, 00:18:48.850 "large_bufsize": 135168, 00:18:48.850 "enable_numa": false 00:18:48.850 } 00:18:48.850 } 00:18:48.850 ] 00:18:48.850 }, 00:18:48.850 { 00:18:48.850 "subsystem": "sock", 00:18:48.850 "config": [ 00:18:48.850 { 00:18:48.850 "method": "sock_set_default_impl", 00:18:48.850 "params": { 
00:18:48.850 "impl_name": "uring" 00:18:48.850 } 00:18:48.850 }, 00:18:48.850 { 00:18:48.850 "method": "sock_impl_set_options", 00:18:48.850 "params": { 00:18:48.850 "impl_name": "ssl", 00:18:48.850 "recv_buf_size": 4096, 00:18:48.850 "send_buf_size": 4096, 00:18:48.850 "enable_recv_pipe": true, 00:18:48.850 "enable_quickack": false, 00:18:48.850 "enable_placement_id": 0, 00:18:48.850 "enable_zerocopy_send_server": true, 00:18:48.850 "enable_zerocopy_send_client": false, 00:18:48.850 "zerocopy_threshold": 0, 00:18:48.850 "tls_version": 0, 00:18:48.850 "enable_ktls": false 00:18:48.850 } 00:18:48.850 }, 00:18:48.850 { 00:18:48.850 "method": "sock_impl_set_options", 00:18:48.850 "params": { 00:18:48.850 "impl_name": "posix", 00:18:48.850 "recv_buf_size": 2097152, 00:18:48.851 "send_buf_size": 2097152, 00:18:48.851 "enable_recv_pipe": true, 00:18:48.851 "enable_quickack": false, 00:18:48.851 "enable_placement_id": 0, 00:18:48.851 "enable_zerocopy_send_server": true, 00:18:48.851 "enable_zerocopy_send_client": false, 00:18:48.851 "zerocopy_threshold": 0, 00:18:48.851 "tls_version": 0, 00:18:48.851 "enable_ktls": false 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "sock_impl_set_options", 00:18:48.851 "params": { 00:18:48.851 "impl_name": "uring", 00:18:48.851 "recv_buf_size": 2097152, 00:18:48.851 "send_buf_size": 2097152, 00:18:48.851 "enable_recv_pipe": true, 00:18:48.851 "enable_quickack": false, 00:18:48.851 "enable_placement_id": 0, 00:18:48.851 "enable_zerocopy_send_server": false, 00:18:48.851 "enable_zerocopy_send_client": false, 00:18:48.851 "zerocopy_threshold": 0, 00:18:48.851 "tls_version": 0, 00:18:48.851 "enable_ktls": false 00:18:48.851 } 00:18:48.851 } 00:18:48.851 ] 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "subsystem": "vmd", 00:18:48.851 "config": [] 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "subsystem": "accel", 00:18:48.851 "config": [ 00:18:48.851 { 00:18:48.851 "method": "accel_set_options", 00:18:48.851 "params": { 00:18:48.851 "small_cache_size": 128, 00:18:48.851 "large_cache_size": 16, 00:18:48.851 "task_count": 2048, 00:18:48.851 "sequence_count": 2048, 00:18:48.851 "buf_count": 2048 00:18:48.851 } 00:18:48.851 } 00:18:48.851 ] 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "subsystem": "bdev", 00:18:48.851 "config": [ 00:18:48.851 { 00:18:48.851 "method": "bdev_set_options", 00:18:48.851 "params": { 00:18:48.851 "bdev_io_pool_size": 65535, 00:18:48.851 "bdev_io_cache_size": 256, 00:18:48.851 "bdev_auto_examine": true, 00:18:48.851 "iobuf_small_cache_size": 128, 00:18:48.851 "iobuf_large_cache_size": 16 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_raid_set_options", 00:18:48.851 "params": { 00:18:48.851 "process_window_size_kb": 1024, 00:18:48.851 "process_max_bandwidth_mb_sec": 0 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_iscsi_set_options", 00:18:48.851 "params": { 00:18:48.851 "timeout_sec": 30 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_nvme_set_options", 00:18:48.851 "params": { 00:18:48.851 "action_on_timeout": "none", 00:18:48.851 "timeout_us": 0, 00:18:48.851 "timeout_admin_us": 0, 00:18:48.851 "keep_alive_timeout_ms": 10000, 00:18:48.851 "arbitration_burst": 0, 00:18:48.851 "low_priority_weight": 0, 00:18:48.851 "medium_priority_weight": 0, 00:18:48.851 "high_priority_weight": 0, 00:18:48.851 "nvme_adminq_poll_period_us": 10000, 00:18:48.851 "nvme_ioq_poll_period_us": 0, 00:18:48.851 "io_queue_requests": 0, 00:18:48.851 "delay_cmd_submit": 
true, 00:18:48.851 "transport_retry_count": 4, 00:18:48.851 "bdev_retry_count": 3, 00:18:48.851 "transport_ack_timeout": 0, 00:18:48.851 "ctrlr_loss_timeout_sec": 0, 00:18:48.851 "reconnect_delay_sec": 0, 00:18:48.851 "fast_io_fail_timeout_sec": 0, 00:18:48.851 "disable_auto_failback": false, 00:18:48.851 "generate_uuids": false, 00:18:48.851 "transport_tos": 0, 00:18:48.851 "nvme_error_stat": false, 00:18:48.851 "rdma_srq_size": 0, 00:18:48.851 "io_path_stat": false, 00:18:48.851 "allow_accel_sequence": false, 00:18:48.851 "rdma_max_cq_size": 0, 00:18:48.851 "rdma_cm_event_timeout_ms": 0, 00:18:48.851 "dhchap_digests": [ 00:18:48.851 "sha256", 00:18:48.851 "sha384", 00:18:48.851 "sha512" 00:18:48.851 ], 00:18:48.851 "dhchap_dhgroups": [ 00:18:48.851 "null", 00:18:48.851 "ffdhe2048", 00:18:48.851 "ffdhe3072", 00:18:48.851 "ffdhe4096", 00:18:48.851 "ffdhe6144", 00:18:48.851 "ffdhe8192" 00:18:48.851 ] 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_nvme_set_hotplug", 00:18:48.851 "params": { 00:18:48.851 "period_us": 100000, 00:18:48.851 "enable": false 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_malloc_create", 00:18:48.851 "params": { 00:18:48.851 "name": "malloc0", 00:18:48.851 "num_blocks": 8192, 00:18:48.851 "block_size": 4096, 00:18:48.851 "physical_block_size": 4096, 00:18:48.851 "uuid": "1a0b89ff-151b-4dfc-a1b1-2ec94d6e20f2", 00:18:48.851 "optimal_io_boundary": 0, 00:18:48.851 "md_size": 0, 00:18:48.851 "dif_type": 0, 00:18:48.851 "dif_is_head_of_md": false, 00:18:48.851 "dif_pi_format": 0 00:18:48.851 } 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "method": "bdev_wait_for_examine" 00:18:48.851 } 00:18:48.851 ] 00:18:48.851 }, 00:18:48.851 { 00:18:48.851 "subsystem": "nbd", 00:18:48.852 "config": [] 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "subsystem": "scheduler", 00:18:48.852 "config": [ 00:18:48.852 { 00:18:48.852 "method": "framework_set_scheduler", 00:18:48.852 "params": { 00:18:48.852 "name": "static" 00:18:48.852 } 00:18:48.852 } 00:18:48.852 ] 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "subsystem": "nvmf", 00:18:48.852 "config": [ 00:18:48.852 { 00:18:48.852 "method": "nvmf_set_config", 00:18:48.852 "params": { 00:18:48.852 "discovery_filter": "match_any", 00:18:48.852 "admin_cmd_passthru": { 00:18:48.852 "identify_ctrlr": false 00:18:48.852 }, 00:18:48.852 "dhchap_digests": [ 00:18:48.852 "sha256", 00:18:48.852 "sha384", 00:18:48.852 "sha512" 00:18:48.852 ], 00:18:48.852 "dhchap_dhgroups": [ 00:18:48.852 "null", 00:18:48.852 "ffdhe2048", 00:18:48.852 "ffdhe3072", 00:18:48.852 "ffdhe4096", 00:18:48.852 "ffdhe6144", 00:18:48.852 "ffdhe8192" 00:18:48.852 ] 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_set_max_subsystems", 00:18:48.852 "params": { 00:18:48.852 "max_subsystems": 1024 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_set_crdt", 00:18:48.852 "params": { 00:18:48.852 "crdt1": 0, 00:18:48.852 "crdt2": 0, 00:18:48.852 "crdt3": 0 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_create_transport", 00:18:48.852 "params": { 00:18:48.852 "trtype": "TCP", 00:18:48.852 "max_queue_depth": 128, 00:18:48.852 "max_io_qpairs_per_ctrlr": 127, 00:18:48.852 "in_capsule_data_size": 4096, 00:18:48.852 "max_io_size": 131072, 00:18:48.852 "io_unit_size": 131072, 00:18:48.852 "max_aq_depth": 128, 00:18:48.852 "num_shared_buffers": 511, 00:18:48.852 "buf_cache_size": 4294967295, 00:18:48.852 "dif_insert_or_strip": false, 00:18:48.852 "zcopy": false, 
00:18:48.852 "c2h_success": false, 00:18:48.852 "sock_priority": 0, 00:18:48.852 "abort_timeout_sec": 1, 00:18:48.852 "ack_timeout": 0, 00:18:48.852 "data_wr_pool_size": 0 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_create_subsystem", 00:18:48.852 "params": { 00:18:48.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.852 "allow_any_host": false, 00:18:48.852 "serial_number": "SPDK00000000000001", 00:18:48.852 "model_number": "SPDK bdev Controller", 00:18:48.852 "max_namespaces": 10, 00:18:48.852 "min_cntlid": 1, 00:18:48.852 "max_cntlid": 65519, 00:18:48.852 "ana_reporting": false 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_subsystem_add_host", 00:18:48.852 "params": { 00:18:48.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.852 "host": "nqn.2016-06.io.spdk:host1", 00:18:48.852 "psk": "key0" 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_subsystem_add_ns", 00:18:48.852 "params": { 00:18:48.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.852 "namespace": { 00:18:48.852 "nsid": 1, 00:18:48.852 "bdev_name": "malloc0", 00:18:48.852 "nguid": "1A0B89FF151B4DFCA1B12EC94D6E20F2", 00:18:48.852 "uuid": "1a0b89ff-151b-4dfc-a1b1-2ec94d6e20f2", 00:18:48.852 "no_auto_visible": false 00:18:48.852 } 00:18:48.852 } 00:18:48.852 }, 00:18:48.852 { 00:18:48.852 "method": "nvmf_subsystem_add_listener", 00:18:48.852 "params": { 00:18:48.852 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.852 "listen_address": { 00:18:48.852 "trtype": "TCP", 00:18:48.852 "adrfam": "IPv4", 00:18:48.852 "traddr": "10.0.0.2", 00:18:48.852 "trsvcid": "4420" 00:18:48.852 }, 00:18:48.852 "secure_channel": true 00:18:48.852 } 00:18:48.852 } 00:18:48.852 ] 00:18:48.852 } 00:18:48.852 ] 00:18:48.852 }' 00:18:48.852 11:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:49.110 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:49.110 "subsystems": [ 00:18:49.110 { 00:18:49.110 "subsystem": "keyring", 00:18:49.110 "config": [ 00:18:49.110 { 00:18:49.110 "method": "keyring_file_add_key", 00:18:49.110 "params": { 00:18:49.110 "name": "key0", 00:18:49.110 "path": "/tmp/tmp.dnyitBhKEg" 00:18:49.110 } 00:18:49.110 } 00:18:49.110 ] 00:18:49.110 }, 00:18:49.110 { 00:18:49.110 "subsystem": "iobuf", 00:18:49.110 "config": [ 00:18:49.110 { 00:18:49.110 "method": "iobuf_set_options", 00:18:49.110 "params": { 00:18:49.110 "small_pool_count": 8192, 00:18:49.110 "large_pool_count": 1024, 00:18:49.110 "small_bufsize": 8192, 00:18:49.110 "large_bufsize": 135168, 00:18:49.110 "enable_numa": false 00:18:49.110 } 00:18:49.110 } 00:18:49.110 ] 00:18:49.110 }, 00:18:49.110 { 00:18:49.110 "subsystem": "sock", 00:18:49.110 "config": [ 00:18:49.110 { 00:18:49.110 "method": "sock_set_default_impl", 00:18:49.110 "params": { 00:18:49.110 "impl_name": "uring" 00:18:49.110 } 00:18:49.110 }, 00:18:49.111 { 00:18:49.111 "method": "sock_impl_set_options", 00:18:49.111 "params": { 00:18:49.111 "impl_name": "ssl", 00:18:49.111 "recv_buf_size": 4096, 00:18:49.111 "send_buf_size": 4096, 00:18:49.111 "enable_recv_pipe": true, 00:18:49.111 "enable_quickack": false, 00:18:49.111 "enable_placement_id": 0, 00:18:49.111 "enable_zerocopy_send_server": true, 00:18:49.111 "enable_zerocopy_send_client": false, 00:18:49.111 "zerocopy_threshold": 0, 00:18:49.111 "tls_version": 0, 00:18:49.111 "enable_ktls": false 00:18:49.111 } 00:18:49.111 }, 
00:18:49.111 { 00:18:49.111 "method": "sock_impl_set_options", 00:18:49.111 "params": { 00:18:49.111 "impl_name": "posix", 00:18:49.111 "recv_buf_size": 2097152, 00:18:49.111 "send_buf_size": 2097152, 00:18:49.111 "enable_recv_pipe": true, 00:18:49.111 "enable_quickack": false, 00:18:49.111 "enable_placement_id": 0, 00:18:49.111 "enable_zerocopy_send_server": true, 00:18:49.111 "enable_zerocopy_send_client": false, 00:18:49.111 "zerocopy_threshold": 0, 00:18:49.111 "tls_version": 0, 00:18:49.111 "enable_ktls": false 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "sock_impl_set_options", 00:18:49.111 "params": { 00:18:49.111 "impl_name": "uring", 00:18:49.111 "recv_buf_size": 2097152, 00:18:49.111 "send_buf_size": 2097152, 00:18:49.111 "enable_recv_pipe": true, 00:18:49.111 "enable_quickack": false, 00:18:49.111 "enable_placement_id": 0, 00:18:49.111 "enable_zerocopy_send_server": false, 00:18:49.111 "enable_zerocopy_send_client": false, 00:18:49.111 "zerocopy_threshold": 0, 00:18:49.111 "tls_version": 0, 00:18:49.111 "enable_ktls": false 00:18:49.111 } 00:18:49.111 } 00:18:49.111 ] 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "subsystem": "vmd", 00:18:49.111 "config": [] 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "subsystem": "accel", 00:18:49.111 "config": [ 00:18:49.111 { 00:18:49.111 "method": "accel_set_options", 00:18:49.111 "params": { 00:18:49.111 "small_cache_size": 128, 00:18:49.111 "large_cache_size": 16, 00:18:49.111 "task_count": 2048, 00:18:49.111 "sequence_count": 2048, 00:18:49.111 "buf_count": 2048 00:18:49.111 } 00:18:49.111 } 00:18:49.111 ] 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "subsystem": "bdev", 00:18:49.111 "config": [ 00:18:49.111 { 00:18:49.111 "method": "bdev_set_options", 00:18:49.111 "params": { 00:18:49.111 "bdev_io_pool_size": 65535, 00:18:49.111 "bdev_io_cache_size": 256, 00:18:49.111 "bdev_auto_examine": true, 00:18:49.111 "iobuf_small_cache_size": 128, 00:18:49.111 "iobuf_large_cache_size": 16 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_raid_set_options", 00:18:49.111 "params": { 00:18:49.111 "process_window_size_kb": 1024, 00:18:49.111 "process_max_bandwidth_mb_sec": 0 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_iscsi_set_options", 00:18:49.111 "params": { 00:18:49.111 "timeout_sec": 30 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_nvme_set_options", 00:18:49.111 "params": { 00:18:49.111 "action_on_timeout": "none", 00:18:49.111 "timeout_us": 0, 00:18:49.111 "timeout_admin_us": 0, 00:18:49.111 "keep_alive_timeout_ms": 10000, 00:18:49.111 "arbitration_burst": 0, 00:18:49.111 "low_priority_weight": 0, 00:18:49.111 "medium_priority_weight": 0, 00:18:49.111 "high_priority_weight": 0, 00:18:49.111 "nvme_adminq_poll_period_us": 10000, 00:18:49.111 "nvme_ioq_poll_period_us": 0, 00:18:49.111 "io_queue_requests": 512, 00:18:49.111 "delay_cmd_submit": true, 00:18:49.111 "transport_retry_count": 4, 00:18:49.111 "bdev_retry_count": 3, 00:18:49.111 "transport_ack_timeout": 0, 00:18:49.111 "ctrlr_loss_timeout_sec": 0, 00:18:49.111 "reconnect_delay_sec": 0, 00:18:49.111 "fast_io_fail_timeout_sec": 0, 00:18:49.111 "disable_auto_failback": false, 00:18:49.111 "generate_uuids": false, 00:18:49.111 "transport_tos": 0, 00:18:49.111 "nvme_error_stat": false, 00:18:49.111 "rdma_srq_size": 0, 00:18:49.111 "io_path_stat": false, 00:18:49.111 "allow_accel_sequence": false, 00:18:49.111 "rdma_max_cq_size": 0, 00:18:49.111 "rdma_cm_event_timeout_ms": 0, 00:18:49.111 
"dhchap_digests": [ 00:18:49.111 "sha256", 00:18:49.111 "sha384", 00:18:49.111 "sha512" 00:18:49.111 ], 00:18:49.111 "dhchap_dhgroups": [ 00:18:49.111 "null", 00:18:49.111 "ffdhe2048", 00:18:49.111 "ffdhe3072", 00:18:49.111 "ffdhe4096", 00:18:49.111 "ffdhe6144", 00:18:49.111 "ffdhe8192" 00:18:49.111 ] 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_nvme_attach_controller", 00:18:49.111 "params": { 00:18:49.111 "name": "TLSTEST", 00:18:49.111 "trtype": "TCP", 00:18:49.111 "adrfam": "IPv4", 00:18:49.111 "traddr": "10.0.0.2", 00:18:49.111 "trsvcid": "4420", 00:18:49.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.111 "prchk_reftag": false, 00:18:49.111 "prchk_guard": false, 00:18:49.111 "ctrlr_loss_timeout_sec": 0, 00:18:49.111 "reconnect_delay_sec": 0, 00:18:49.111 "fast_io_fail_timeout_sec": 0, 00:18:49.111 "psk": "key0", 00:18:49.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.111 "hdgst": false, 00:18:49.111 "ddgst": false, 00:18:49.111 "multipath": "multipath" 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_nvme_set_hotplug", 00:18:49.111 "params": { 00:18:49.111 "period_us": 100000, 00:18:49.111 "enable": false 00:18:49.111 } 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "method": "bdev_wait_for_examine" 00:18:49.111 } 00:18:49.111 ] 00:18:49.111 }, 00:18:49.111 { 00:18:49.111 "subsystem": "nbd", 00:18:49.111 "config": [] 00:18:49.111 } 00:18:49.111 ] 00:18:49.111 }' 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71904 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71904 ']' 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71904 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.111 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71904 00:18:49.369 killing process with pid 71904 00:18:49.369 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.369 00:18:49.369 Latency(us) 00:18:49.369 [2024-12-05T11:02:16.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.369 [2024-12-05T11:02:16.529Z] =================================================================================================================== 00:18:49.370 [2024-12-05T11:02:16.529Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71904' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71904 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71904 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71853 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71853 ']' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71853 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71853 00:18:49.370 killing process with pid 71853 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71853' 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71853 00:18:49.370 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71853 00:18:49.938 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:49.938 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:49.938 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.938 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:49.938 "subsystems": [ 00:18:49.938 { 00:18:49.938 "subsystem": "keyring", 00:18:49.938 "config": [ 00:18:49.938 { 00:18:49.938 "method": "keyring_file_add_key", 00:18:49.938 "params": { 00:18:49.938 "name": "key0", 00:18:49.938 "path": "/tmp/tmp.dnyitBhKEg" 00:18:49.938 } 00:18:49.938 } 00:18:49.938 ] 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "subsystem": "iobuf", 00:18:49.938 "config": [ 00:18:49.938 { 00:18:49.938 "method": "iobuf_set_options", 00:18:49.938 "params": { 00:18:49.938 "small_pool_count": 8192, 00:18:49.938 "large_pool_count": 1024, 00:18:49.938 "small_bufsize": 8192, 00:18:49.938 "large_bufsize": 135168, 00:18:49.938 "enable_numa": false 00:18:49.938 } 00:18:49.938 } 00:18:49.938 ] 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "subsystem": "sock", 00:18:49.938 "config": [ 00:18:49.938 { 00:18:49.938 "method": "sock_set_default_impl", 00:18:49.938 "params": { 00:18:49.938 "impl_name": "uring" 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "sock_impl_set_options", 00:18:49.938 "params": { 00:18:49.938 "impl_name": "ssl", 00:18:49.938 "recv_buf_size": 4096, 00:18:49.938 "send_buf_size": 4096, 00:18:49.938 "enable_recv_pipe": true, 00:18:49.938 "enable_quickack": false, 00:18:49.938 "enable_placement_id": 0, 00:18:49.938 "enable_zerocopy_send_server": true, 00:18:49.938 "enable_zerocopy_send_client": false, 00:18:49.938 "zerocopy_threshold": 0, 00:18:49.938 "tls_version": 0, 00:18:49.938 "enable_ktls": false 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "sock_impl_set_options", 00:18:49.938 "params": { 00:18:49.938 "impl_name": "posix", 00:18:49.938 "recv_buf_size": 2097152, 00:18:49.938 "send_buf_size": 2097152, 00:18:49.938 "enable_recv_pipe": true, 00:18:49.938 "enable_quickack": false, 00:18:49.938 "enable_placement_id": 0, 00:18:49.938 "enable_zerocopy_send_server": true, 00:18:49.938 "enable_zerocopy_send_client": false, 00:18:49.938 "zerocopy_threshold": 0, 00:18:49.938 "tls_version": 0, 00:18:49.938 "enable_ktls": false 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "sock_impl_set_options", 
00:18:49.938 "params": { 00:18:49.938 "impl_name": "uring", 00:18:49.938 "recv_buf_size": 2097152, 00:18:49.938 "send_buf_size": 2097152, 00:18:49.938 "enable_recv_pipe": true, 00:18:49.938 "enable_quickack": false, 00:18:49.938 "enable_placement_id": 0, 00:18:49.938 "enable_zerocopy_send_server": false, 00:18:49.938 "enable_zerocopy_send_client": false, 00:18:49.938 "zerocopy_threshold": 0, 00:18:49.938 "tls_version": 0, 00:18:49.938 "enable_ktls": false 00:18:49.938 } 00:18:49.938 } 00:18:49.938 ] 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "subsystem": "vmd", 00:18:49.938 "config": [] 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "subsystem": "accel", 00:18:49.938 "config": [ 00:18:49.938 { 00:18:49.938 "method": "accel_set_options", 00:18:49.938 "params": { 00:18:49.938 "small_cache_size": 128, 00:18:49.938 "large_cache_size": 16, 00:18:49.938 "task_count": 2048, 00:18:49.938 "sequence_count": 2048, 00:18:49.938 "buf_count": 2048 00:18:49.938 } 00:18:49.938 } 00:18:49.938 ] 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "subsystem": "bdev", 00:18:49.938 "config": [ 00:18:49.938 { 00:18:49.938 "method": "bdev_set_options", 00:18:49.938 "params": { 00:18:49.938 "bdev_io_pool_size": 65535, 00:18:49.938 "bdev_io_cache_size": 256, 00:18:49.938 "bdev_auto_examine": true, 00:18:49.938 "iobuf_small_cache_size": 128, 00:18:49.938 "iobuf_large_cache_size": 16 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "bdev_raid_set_options", 00:18:49.938 "params": { 00:18:49.938 "process_window_size_kb": 1024, 00:18:49.938 "process_max_bandwidth_mb_sec": 0 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "bdev_iscsi_set_options", 00:18:49.938 "params": { 00:18:49.938 "timeout_sec": 30 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "bdev_nvme_set_options", 00:18:49.938 "params": { 00:18:49.938 "action_on_timeout": "none", 00:18:49.938 "timeout_us": 0, 00:18:49.938 "timeout_admin_us": 0, 00:18:49.938 "keep_alive_timeout_ms": 10000, 00:18:49.938 "arbitration_burst": 0, 00:18:49.938 "low_priority_weight": 0, 00:18:49.938 "medium_priority_weight": 0, 00:18:49.938 "high_priority_weight": 0, 00:18:49.938 "nvme_adminq_poll_period_us": 10000, 00:18:49.938 "nvme_ioq_poll_period_us": 0, 00:18:49.938 "io_queue_requests": 0, 00:18:49.938 "delay_cmd_submit": true, 00:18:49.938 "transport_retry_count": 4, 00:18:49.938 "bdev_retry_count": 3, 00:18:49.938 "transport_ack_timeout": 0, 00:18:49.938 "ctrlr_loss_timeout_sec": 0, 00:18:49.938 "reconnect_delay_sec": 0, 00:18:49.938 "fast_io_fail_timeout_sec": 0, 00:18:49.938 "disable_auto_failback": false, 00:18:49.938 "generate_uuids": false, 00:18:49.938 "transport_tos": 0, 00:18:49.938 "nvme_error_stat": false, 00:18:49.938 "rdma_srq_size": 0, 00:18:49.938 "io_path_stat": false, 00:18:49.938 "allow_accel_sequence": false, 00:18:49.938 "rdma_max_cq_size": 0, 00:18:49.938 "rdma_cm_event_timeout_ms": 0, 00:18:49.938 "dhchap_digests": [ 00:18:49.938 "sha256", 00:18:49.938 "sha384", 00:18:49.938 "sha512" 00:18:49.938 ], 00:18:49.938 "dhchap_dhgroups": [ 00:18:49.938 "null", 00:18:49.938 "ffdhe2048", 00:18:49.938 "ffdhe3072", 00:18:49.938 "ffdhe4096", 00:18:49.938 "ffdhe6144", 00:18:49.938 "ffdhe8192" 00:18:49.938 ] 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "bdev_nvme_set_hotplug", 00:18:49.938 "params": { 00:18:49.938 "period_us": 100000, 00:18:49.938 "enable": false 00:18:49.938 } 00:18:49.938 }, 00:18:49.938 { 00:18:49.938 "method": "bdev_malloc_create", 00:18:49.938 "params": { 00:18:49.939 
"name": "malloc0", 00:18:49.939 "num_blocks": 8192, 00:18:49.939 "block_size": 4096, 00:18:49.939 "physical_block_size": 4096, 00:18:49.939 "uuid": "1a0b89ff-151b-4dfc-a1b1-2ec94d6e20f2", 00:18:49.939 "optimal_io_boundary": 0, 00:18:49.939 "md_size": 0, 00:18:49.939 "dif_type": 0, 00:18:49.939 "dif_is_head_of_md": false, 00:18:49.939 "dif_pi_format": 0 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "bdev_wait_for_examine" 00:18:49.939 } 00:18:49.939 ] 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "subsystem": "nbd", 00:18:49.939 "config": [] 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "subsystem": "scheduler", 00:18:49.939 "config": [ 00:18:49.939 { 00:18:49.939 "method": "framework_set_scheduler", 00:18:49.939 "params": { 00:18:49.939 "name": "static" 00:18:49.939 } 00:18:49.939 } 00:18:49.939 ] 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "subsystem": "nvmf", 00:18:49.939 "config": [ 00:18:49.939 { 00:18:49.939 "method": "nvmf_set_config", 00:18:49.939 "params": { 00:18:49.939 "discovery_filter": "match_any", 00:18:49.939 "admin_cmd_passthru": { 00:18:49.939 "identify_ctrlr": false 00:18:49.939 }, 00:18:49.939 "dhchap_digests": [ 00:18:49.939 "sha256", 00:18:49.939 "sha384", 00:18:49.939 "sha512" 00:18:49.939 ], 00:18:49.939 "dhchap_dhgroups": [ 00:18:49.939 "null", 00:18:49.939 "ffdhe2048", 00:18:49.939 "ffdhe3072", 00:18:49.939 "ffdhe4096", 00:18:49.939 "ffdhe6144", 00:18:49.939 "ffdhe8192" 00:18:49.939 ] 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_set_max_subsystems", 00:18:49.939 "params": { 00:18:49.939 "max_subsystems": 1024 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_set_crdt", 00:18:49.939 "params": { 00:18:49.939 "crdt1": 0, 00:18:49.939 "crdt2": 0, 00:18:49.939 "crdt3": 0 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_create_transport", 00:18:49.939 "params": { 00:18:49.939 "trtype": "TCP", 00:18:49.939 "max_queue_depth": 128, 00:18:49.939 "max_io_qpairs_per_ctrlr": 127, 00:18:49.939 "in_capsule_data_size": 4096, 00:18:49.939 "max_io_size": 131072, 00:18:49.939 "io_unit_size": 131072, 00:18:49.939 "max_aq_depth": 128, 00:18:49.939 "num_shared_buffers": 511, 00:18:49.939 "buf_cache_size": 4294967295, 00:18:49.939 "dif_insert_or_strip": false, 00:18:49.939 "zcopy": false, 00:18:49.939 "c2h_success": false, 00:18:49.939 "sock_priority": 0, 00:18:49.939 "abort_timeout_sec": 1, 00:18:49.939 "ack_timeout": 0, 00:18:49.939 "data_wr_pool_size": 0 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_create_subsystem", 00:18:49.939 "params": { 00:18:49.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.939 "allow_any_host": false, 00:18:49.939 "serial_number": "SPDK00000000000001", 00:18:49.939 "model_number": "SPDK bdev Controller", 00:18:49.939 "max_namespaces": 10, 00:18:49.939 "min_cntlid": 1, 00:18:49.939 "max_cntlid": 65519, 00:18:49.939 "ana_reporting": false 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_subsystem_add_host", 00:18:49.939 "params": { 00:18:49.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.939 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.939 "psk": "key0" 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_subsystem_add_ns", 00:18:49.939 "params": { 00:18:49.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.939 "namespace": { 00:18:49.939 "nsid": 1, 00:18:49.939 "bdev_name": "malloc0", 00:18:49.939 "nguid": "1A0B89FF151B4DFCA1B12EC94D6E20F2", 00:18:49.939 "uuid": 
"1a0b89ff-151b-4dfc-a1b1-2ec94d6e20f2", 00:18:49.939 "no_auto_visible": false 00:18:49.939 } 00:18:49.939 } 00:18:49.939 }, 00:18:49.939 { 00:18:49.939 "method": "nvmf_subsystem_add_listener", 00:18:49.939 "params": { 00:18:49.939 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.939 "listen_address": { 00:18:49.939 "trtype": "TCP", 00:18:49.939 "adrfam": "IPv4", 00:18:49.939 "traddr": "10.0.0.2", 00:18:49.939 "trsvcid": "4420" 00:18:49.939 }, 00:18:49.939 "secure_channel": true 00:18:49.939 } 00:18:49.939 } 00:18:49.939 ] 00:18:49.939 } 00:18:49.939 ] 00:18:49.939 }' 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=71958 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 71958 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71958 ']' 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.939 11:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.939 [2024-12-05 11:02:16.864341] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:49.939 [2024-12-05 11:02:16.864426] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.939 [2024-12-05 11:02:17.016523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.939 [2024-12-05 11:02:17.090740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.939 [2024-12-05 11:02:17.090803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.939 [2024-12-05 11:02:17.090813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.939 [2024-12-05 11:02:17.090822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.939 [2024-12-05 11:02:17.090830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:49.939 [2024-12-05 11:02:17.091269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.198 [2024-12-05 11:02:17.281375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.456 [2024-12-05 11:02:17.382096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.456 [2024-12-05 11:02:17.413988] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.456 [2024-12-05 11:02:17.414509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71990 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71990 /var/tmp/bdevperf.sock 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71990 ']' 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
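The same injection trick on the initiator side: bdevperf starts idle (-z) on its own RPC socket and reads the bdev config from a file descriptor; a condensed sketch, where $bdevperfconf is the JSON captured at tls.sh@199 above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
      -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
      -c <(echo "$bdevperfconf")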
00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.715 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.716 11:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:50.716 "subsystems": [ 00:18:50.716 { 00:18:50.716 "subsystem": "keyring", 00:18:50.716 "config": [ 00:18:50.716 { 00:18:50.716 "method": "keyring_file_add_key", 00:18:50.716 "params": { 00:18:50.716 "name": "key0", 00:18:50.716 "path": "/tmp/tmp.dnyitBhKEg" 00:18:50.716 } 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "iobuf", 00:18:50.716 "config": [ 00:18:50.716 { 00:18:50.716 "method": "iobuf_set_options", 00:18:50.716 "params": { 00:18:50.716 "small_pool_count": 8192, 00:18:50.716 "large_pool_count": 1024, 00:18:50.716 "small_bufsize": 8192, 00:18:50.716 "large_bufsize": 135168, 00:18:50.716 "enable_numa": false 00:18:50.716 } 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "sock", 00:18:50.716 "config": [ 00:18:50.716 { 00:18:50.716 "method": "sock_set_default_impl", 00:18:50.716 "params": { 00:18:50.716 "impl_name": "uring" 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "sock_impl_set_options", 00:18:50.716 "params": { 00:18:50.716 "impl_name": "ssl", 00:18:50.716 "recv_buf_size": 4096, 00:18:50.716 "send_buf_size": 4096, 00:18:50.716 "enable_recv_pipe": true, 00:18:50.716 "enable_quickack": false, 00:18:50.716 "enable_placement_id": 0, 00:18:50.716 "enable_zerocopy_send_server": true, 00:18:50.716 "enable_zerocopy_send_client": false, 00:18:50.716 "zerocopy_threshold": 0, 00:18:50.716 "tls_version": 0, 00:18:50.716 "enable_ktls": false 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "sock_impl_set_options", 00:18:50.716 "params": { 00:18:50.716 "impl_name": "posix", 00:18:50.716 "recv_buf_size": 2097152, 00:18:50.716 "send_buf_size": 2097152, 00:18:50.716 "enable_recv_pipe": true, 00:18:50.716 "enable_quickack": false, 00:18:50.716 "enable_placement_id": 0, 00:18:50.716 "enable_zerocopy_send_server": true, 00:18:50.716 "enable_zerocopy_send_client": false, 00:18:50.716 "zerocopy_threshold": 0, 00:18:50.716 "tls_version": 0, 00:18:50.716 "enable_ktls": false 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "sock_impl_set_options", 00:18:50.716 "params": { 00:18:50.716 "impl_name": "uring", 00:18:50.716 "recv_buf_size": 2097152, 00:18:50.716 "send_buf_size": 2097152, 00:18:50.716 "enable_recv_pipe": true, 00:18:50.716 "enable_quickack": false, 00:18:50.716 "enable_placement_id": 0, 00:18:50.716 "enable_zerocopy_send_server": false, 00:18:50.716 "enable_zerocopy_send_client": false, 00:18:50.716 "zerocopy_threshold": 0, 00:18:50.716 "tls_version": 0, 00:18:50.716 "enable_ktls": false 00:18:50.716 } 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "vmd", 00:18:50.716 "config": [] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "accel", 00:18:50.716 "config": [ 00:18:50.716 { 00:18:50.716 "method": "accel_set_options", 00:18:50.716 "params": { 00:18:50.716 "small_cache_size": 128, 00:18:50.716 "large_cache_size": 16, 00:18:50.716 "task_count": 2048, 00:18:50.716 "sequence_count": 
2048, 00:18:50.716 "buf_count": 2048 00:18:50.716 } 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "bdev", 00:18:50.716 "config": [ 00:18:50.716 { 00:18:50.716 "method": "bdev_set_options", 00:18:50.716 "params": { 00:18:50.716 "bdev_io_pool_size": 65535, 00:18:50.716 "bdev_io_cache_size": 256, 00:18:50.716 "bdev_auto_examine": true, 00:18:50.716 "iobuf_small_cache_size": 128, 00:18:50.716 "iobuf_large_cache_size": 16 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_raid_set_options", 00:18:50.716 "params": { 00:18:50.716 "process_window_size_kb": 1024, 00:18:50.716 "process_max_bandwidth_mb_sec": 0 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_iscsi_set_options", 00:18:50.716 "params": { 00:18:50.716 "timeout_sec": 30 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_nvme_set_options", 00:18:50.716 "params": { 00:18:50.716 "action_on_timeout": "none", 00:18:50.716 "timeout_us": 0, 00:18:50.716 "timeout_admin_us": 0, 00:18:50.716 "keep_alive_timeout_ms": 10000, 00:18:50.716 "arbitration_burst": 0, 00:18:50.716 "low_priority_weight": 0, 00:18:50.716 "medium_priority_weight": 0, 00:18:50.716 "high_priority_weight": 0, 00:18:50.716 "nvme_adminq_poll_period_us": 10000, 00:18:50.716 "nvme_ioq_poll_period_us": 0, 00:18:50.716 "io_queue_requests": 512, 00:18:50.716 "delay_cmd_submit": true, 00:18:50.716 "transport_retry_count": 4, 00:18:50.716 "bdev_retry_count": 3, 00:18:50.716 "transport_ack_timeout": 0, 00:18:50.716 "ctrlr_loss_timeout_sec": 0, 00:18:50.716 "reconnect_delay_sec": 0, 00:18:50.716 "fast_io_fail_timeout_sec": 0, 00:18:50.716 "disable_auto_failback": false, 00:18:50.716 "generate_uuids": false, 00:18:50.716 "transport_tos": 0, 00:18:50.716 "nvme_error_stat": false, 00:18:50.716 "rdma_srq_size": 0, 00:18:50.716 "io_path_stat": false, 00:18:50.716 "allow_accel_sequence": false, 00:18:50.716 "rdma_max_cq_size": 0, 00:18:50.716 "rdma_cm_event_timeout_ms": 0, 00:18:50.716 "dhchap_digests": [ 00:18:50.716 "sha256", 00:18:50.716 "sha384", 00:18:50.716 "sha512" 00:18:50.716 ], 00:18:50.716 "dhchap_dhgroups": [ 00:18:50.716 "null", 00:18:50.716 "ffdhe2048", 00:18:50.716 "ffdhe3072", 00:18:50.716 "ffdhe4096", 00:18:50.716 "ffdhe6144", 00:18:50.716 "ffdhe8192" 00:18:50.716 ] 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_nvme_attach_controller", 00:18:50.716 "params": { 00:18:50.716 "name": "TLSTEST", 00:18:50.716 "trtype": "TCP", 00:18:50.716 "adrfam": "IPv4", 00:18:50.716 "traddr": "10.0.0.2", 00:18:50.716 "trsvcid": "4420", 00:18:50.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.716 "prchk_reftag": false, 00:18:50.716 "prchk_guard": false, 00:18:50.716 "ctrlr_loss_timeout_sec": 0, 00:18:50.716 "reconnect_delay_sec": 0, 00:18:50.716 "fast_io_fail_timeout_sec": 0, 00:18:50.716 "psk": "key0", 00:18:50.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.716 "hdgst": false, 00:18:50.716 "ddgst": false, 00:18:50.716 "multipath": "multipath" 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_nvme_set_hotplug", 00:18:50.716 "params": { 00:18:50.716 "period_us": 100000, 00:18:50.716 "enable": false 00:18:50.716 } 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "method": "bdev_wait_for_examine" 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }, 00:18:50.716 { 00:18:50.716 "subsystem": "nbd", 00:18:50.716 "config": [] 00:18:50.716 } 00:18:50.716 ] 00:18:50.716 }' 00:18:50.717 [2024-12-05 11:02:17.861839] Starting SPDK v25.01-pre git 
sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:18:50.717 [2024-12-05 11:02:17.861929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71990 ] 00:18:50.975 [2024-12-05 11:02:18.011968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.975 [2024-12-05 11:02:18.060012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.234 [2024-12-05 11:02:18.182506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:51.234 [2024-12-05 11:02:18.227414] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.803 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.803 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.803 11:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:51.803 Running I/O for 10 seconds... 00:18:53.673 5044.00 IOPS, 19.70 MiB/s [2024-12-05T11:02:22.205Z] 4958.00 IOPS, 19.37 MiB/s [2024-12-05T11:02:23.139Z] 5025.33 IOPS, 19.63 MiB/s [2024-12-05T11:02:24.169Z] 5088.50 IOPS, 19.88 MiB/s [2024-12-05T11:02:25.106Z] 5131.00 IOPS, 20.04 MiB/s [2024-12-05T11:02:26.040Z] 5200.17 IOPS, 20.31 MiB/s [2024-12-05T11:02:26.976Z] 5270.57 IOPS, 20.59 MiB/s [2024-12-05T11:02:27.910Z] 5236.62 IOPS, 20.46 MiB/s [2024-12-05T11:02:28.846Z] 5230.67 IOPS, 20.43 MiB/s [2024-12-05T11:02:28.846Z] 5242.00 IOPS, 20.48 MiB/s 00:19:01.687 Latency(us) 00:19:01.687 [2024-12-05T11:02:28.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.687 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:01.687 Verification LBA range: start 0x0 length 0x2000 00:19:01.687 TLSTESTn1 : 10.01 5247.17 20.50 0.00 0.00 24356.38 4921.78 27583.02 00:19:01.687 [2024-12-05T11:02:28.846Z] =================================================================================================================== 00:19:01.687 [2024-12-05T11:02:28.846Z] Total : 5247.17 20.50 0.00 0.00 24356.38 4921.78 27583.02 00:19:01.687 { 00:19:01.687 "results": [ 00:19:01.687 { 00:19:01.687 "job": "TLSTESTn1", 00:19:01.687 "core_mask": "0x4", 00:19:01.687 "workload": "verify", 00:19:01.687 "status": "finished", 00:19:01.687 "verify_range": { 00:19:01.687 "start": 0, 00:19:01.687 "length": 8192 00:19:01.687 }, 00:19:01.687 "queue_depth": 128, 00:19:01.687 "io_size": 4096, 00:19:01.687 "runtime": 10.013208, 00:19:01.687 "iops": 5247.169538473584, 00:19:01.687 "mibps": 20.496756009662437, 00:19:01.687 "io_failed": 0, 00:19:01.687 "io_timeout": 0, 00:19:01.687 "avg_latency_us": 24356.375753798395, 00:19:01.687 "min_latency_us": 4921.779919678715, 00:19:01.687 "max_latency_us": 27583.02329317269 00:19:01.687 } 00:19:01.687 ], 00:19:01.687 "core_count": 1 00:19:01.687 } 00:19:01.687 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:01.687 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71990 00:19:01.687 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71990 ']' 00:19:01.687 
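A quick cross-check of the summary line above: IOPS multiplied by the 4096-byte I/O size reproduces the reported throughput.

  echo '5247.169538 * 4096 / 1048576' | bc -l   # 20.4967..., the "mibps" value in the results JSON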
11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71990 00:19:01.687 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71990 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71990' 00:19:01.945 killing process with pid 71990 00:19:01.945 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71990 00:19:01.945 Received shutdown signal, test time was about 10.000000 seconds 00:19:01.945 00:19:01.945 Latency(us) 00:19:01.945 [2024-12-05T11:02:29.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.945 [2024-12-05T11:02:29.104Z] =================================================================================================================== 00:19:01.945 [2024-12-05T11:02:29.105Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.946 11:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71990 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71958 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71958 ']' 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71958 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71958 00:19:01.946 killing process with pid 71958 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71958' 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71958 00:19:01.946 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71958 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=72126 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 72126 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 72126 ']' 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.204 11:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:02.204 [2024-12-05 11:02:29.344154] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:02.204 [2024-12-05 11:02:29.344234] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.461 [2024-12-05 11:02:29.478805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.461 [2024-12-05 11:02:29.531074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.461 [2024-12-05 11:02:29.531130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.461 [2024-12-05 11:02:29.531141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.461 [2024-12-05 11:02:29.531150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.461 [2024-12-05 11:02:29.531157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
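The killprocess helper that recurs throughout this trace, sketched from its xtrace lines rather than the verbatim autotest_common.sh source (the uname/ps ownership checks it also performs are omitted here):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1   # probe: is the process still alive?
      kill "$pid"                  # e.g. "kill 71853" in the trace
      wait "$pid"                  # e.g. "wait 71853"; reaps and returns its status
  }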
00:19:02.461 [2024-12-05 11:02:29.531501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.462 [2024-12-05 11:02:29.574625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.dnyitBhKEg 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dnyitBhKEg 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.396 [2024-12-05 11:02:30.501078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.396 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.654 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.913 [2024-12-05 11:02:30.924463] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.913 [2024-12-05 11:02:30.924695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.913 11:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:04.171 malloc0 00:19:04.171 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:04.429 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72182 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72182 /var/tmp/bdevperf.sock 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72182 ']' 00:19:04.688 
11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.688 11:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.947 [2024-12-05 11:02:31.869763] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:04.947 [2024-12-05 11:02:31.869861] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72182 ] 00:19:04.947 [2024-12-05 11:02:32.019129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.947 [2024-12-05 11:02:32.074505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.205 [2024-12-05 11:02:32.116773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:05.771 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.771 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.771 11:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:19:06.029 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:06.287 [2024-12-05 11:02:33.206022] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.287 nvme0n1 00:19:06.287 11:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.287 Running I/O for 1 seconds... 
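[Annotation] On the initiator side the same PSK has to be registered with the bdevperf application itself before the controller attach; the trace above shows both RPCs going to /var/tmp/bdevperf.sock, then perform_tests kicking off the workload. A hedged standalone sketch follows (all commands verbatim from the log, the script framing is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # bdevperf keeps its own keyring, so the PSK is added again on its socket:
    $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg
    $rpc -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp \
         -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # Drive the workload bdevperf was started with (-q 128 -o 4k -w verify -t 1):
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
         -s "$sock" perform_tests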
00:19:07.661 5398.00 IOPS, 21.09 MiB/s 00:19:07.661 Latency(us) 00:19:07.661 [2024-12-05T11:02:34.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.661 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.661 Verification LBA range: start 0x0 length 0x2000 00:19:07.661 nvme0n1 : 1.01 5447.28 21.28 0.00 0.00 23314.94 5237.62 17476.27 00:19:07.661 [2024-12-05T11:02:34.820Z] =================================================================================================================== 00:19:07.661 [2024-12-05T11:02:34.820Z] Total : 5447.28 21.28 0.00 0.00 23314.94 5237.62 17476.27 00:19:07.661 { 00:19:07.661 "results": [ 00:19:07.661 { 00:19:07.661 "job": "nvme0n1", 00:19:07.661 "core_mask": "0x2", 00:19:07.661 "workload": "verify", 00:19:07.661 "status": "finished", 00:19:07.661 "verify_range": { 00:19:07.661 "start": 0, 00:19:07.661 "length": 8192 00:19:07.661 }, 00:19:07.661 "queue_depth": 128, 00:19:07.661 "io_size": 4096, 00:19:07.661 "runtime": 1.014452, 00:19:07.661 "iops": 5447.275967714589, 00:19:07.661 "mibps": 21.278421748885112, 00:19:07.661 "io_failed": 0, 00:19:07.661 "io_timeout": 0, 00:19:07.661 "avg_latency_us": 23314.93684153916, 00:19:07.661 "min_latency_us": 5237.6160642570285, 00:19:07.661 "max_latency_us": 17476.266666666666 00:19:07.661 } 00:19:07.661 ], 00:19:07.661 "core_count": 1 00:19:07.661 } 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72182 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72182 ']' 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72182 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72182 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.661 killing process with pid 72182 00:19:07.661 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.661 00:19:07.661 Latency(us) 00:19:07.661 [2024-12-05T11:02:34.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.661 [2024-12-05T11:02:34.820Z] =================================================================================================================== 00:19:07.661 [2024-12-05T11:02:34.820Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72182' 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72182 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72182 00:19:07.661 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72126 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72126 ']' 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72126 00:19:07.662 11:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72126 00:19:07.662 killing process with pid 72126 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72126' 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72126 00:19:07.662 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72126 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=72233 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 72233 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72233 ']' 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.921 11:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.921 [2024-12-05 11:02:34.989606] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:07.921 [2024-12-05 11:02:34.990813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.180 [2024-12-05 11:02:35.146455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.180 [2024-12-05 11:02:35.200879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.180 [2024-12-05 11:02:35.201122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:08.180 [2024-12-05 11:02:35.201141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.180 [2024-12-05 11:02:35.201150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.180 [2024-12-05 11:02:35.201157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.180 [2024-12-05 11:02:35.201499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.180 [2024-12-05 11:02:35.245405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.747 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.747 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.747 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:08.747 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.747 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.006 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.006 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:09.006 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.006 11:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.006 [2024-12-05 11:02:35.966053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.006 malloc0 00:19:09.006 [2024-12-05 11:02:35.999233] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.006 [2024-12-05 11:02:35.999598] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72265 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72265 /var/tmp/bdevperf.sock 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72265 ']' 00:19:09.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.006 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.006 [2024-12-05 11:02:36.085246] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:09.006 [2024-12-05 11:02:36.085344] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72265 ] 00:19:09.264 [2024-12-05 11:02:36.233486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.264 [2024-12-05 11:02:36.290598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.264 [2024-12-05 11:02:36.333611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:09.833 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.834 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:09.834 11:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dnyitBhKEg 00:19:10.092 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:10.351 [2024-12-05 11:02:37.441300] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.610 nvme0n1 00:19:10.610 11:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:10.610 Running I/O for 1 seconds... 
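[Annotation] perform_tests prints both a human-readable latency table and a JSON blob, as the runs above and below show. As a hedged aside: if that JSON is captured to a file, the headline numbers can be reduced with jq (already used later in this trace); the field names are taken from the log output itself, and results.json is a hypothetical capture:

    jq -r '.results[] |
           "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us (min \(.min_latency_us), max \(.max_latency_us))"' \
       results.json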
00:19:11.543 5364.00 IOPS, 20.95 MiB/s 00:19:11.543 Latency(us) 00:19:11.543 [2024-12-05T11:02:38.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.543 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:11.543 Verification LBA range: start 0x0 length 0x2000 00:19:11.543 nvme0n1 : 1.01 5425.54 21.19 0.00 0.00 23420.96 4158.51 17897.38 00:19:11.543 [2024-12-05T11:02:38.702Z] =================================================================================================================== 00:19:11.543 [2024-12-05T11:02:38.702Z] Total : 5425.54 21.19 0.00 0.00 23420.96 4158.51 17897.38 00:19:11.543 { 00:19:11.543 "results": [ 00:19:11.543 { 00:19:11.543 "job": "nvme0n1", 00:19:11.543 "core_mask": "0x2", 00:19:11.543 "workload": "verify", 00:19:11.543 "status": "finished", 00:19:11.543 "verify_range": { 00:19:11.543 "start": 0, 00:19:11.543 "length": 8192 00:19:11.543 }, 00:19:11.543 "queue_depth": 128, 00:19:11.543 "io_size": 4096, 00:19:11.543 "runtime": 1.01225, 00:19:11.543 "iops": 5425.537169671524, 00:19:11.543 "mibps": 21.19350456902939, 00:19:11.543 "io_failed": 0, 00:19:11.543 "io_timeout": 0, 00:19:11.543 "avg_latency_us": 23420.960516209045, 00:19:11.543 "min_latency_us": 4158.509236947792, 00:19:11.543 "max_latency_us": 17897.38152610442 00:19:11.543 } 00:19:11.543 ], 00:19:11.543 "core_count": 1 00:19:11.543 } 00:19:11.543 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:11.543 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.543 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.802 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.802 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:11.802 "subsystems": [ 00:19:11.802 { 00:19:11.802 "subsystem": "keyring", 00:19:11.802 "config": [ 00:19:11.802 { 00:19:11.802 "method": "keyring_file_add_key", 00:19:11.802 "params": { 00:19:11.802 "name": "key0", 00:19:11.802 "path": "/tmp/tmp.dnyitBhKEg" 00:19:11.802 } 00:19:11.802 } 00:19:11.802 ] 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "subsystem": "iobuf", 00:19:11.802 "config": [ 00:19:11.802 { 00:19:11.802 "method": "iobuf_set_options", 00:19:11.802 "params": { 00:19:11.802 "small_pool_count": 8192, 00:19:11.802 "large_pool_count": 1024, 00:19:11.802 "small_bufsize": 8192, 00:19:11.802 "large_bufsize": 135168, 00:19:11.802 "enable_numa": false 00:19:11.802 } 00:19:11.802 } 00:19:11.802 ] 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "subsystem": "sock", 00:19:11.802 "config": [ 00:19:11.802 { 00:19:11.802 "method": "sock_set_default_impl", 00:19:11.802 "params": { 00:19:11.802 "impl_name": "uring" 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "sock_impl_set_options", 00:19:11.802 "params": { 00:19:11.802 "impl_name": "ssl", 00:19:11.802 "recv_buf_size": 4096, 00:19:11.802 "send_buf_size": 4096, 00:19:11.802 "enable_recv_pipe": true, 00:19:11.802 "enable_quickack": false, 00:19:11.802 "enable_placement_id": 0, 00:19:11.802 "enable_zerocopy_send_server": true, 00:19:11.802 "enable_zerocopy_send_client": false, 00:19:11.802 "zerocopy_threshold": 0, 00:19:11.802 "tls_version": 0, 00:19:11.802 "enable_ktls": false 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "sock_impl_set_options", 00:19:11.802 "params": { 00:19:11.802 "impl_name": "posix", 
00:19:11.802 "recv_buf_size": 2097152, 00:19:11.802 "send_buf_size": 2097152, 00:19:11.802 "enable_recv_pipe": true, 00:19:11.802 "enable_quickack": false, 00:19:11.802 "enable_placement_id": 0, 00:19:11.802 "enable_zerocopy_send_server": true, 00:19:11.802 "enable_zerocopy_send_client": false, 00:19:11.802 "zerocopy_threshold": 0, 00:19:11.802 "tls_version": 0, 00:19:11.802 "enable_ktls": false 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "sock_impl_set_options", 00:19:11.802 "params": { 00:19:11.802 "impl_name": "uring", 00:19:11.802 "recv_buf_size": 2097152, 00:19:11.802 "send_buf_size": 2097152, 00:19:11.802 "enable_recv_pipe": true, 00:19:11.802 "enable_quickack": false, 00:19:11.802 "enable_placement_id": 0, 00:19:11.802 "enable_zerocopy_send_server": false, 00:19:11.802 "enable_zerocopy_send_client": false, 00:19:11.802 "zerocopy_threshold": 0, 00:19:11.802 "tls_version": 0, 00:19:11.802 "enable_ktls": false 00:19:11.802 } 00:19:11.802 } 00:19:11.802 ] 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "subsystem": "vmd", 00:19:11.802 "config": [] 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "subsystem": "accel", 00:19:11.802 "config": [ 00:19:11.802 { 00:19:11.802 "method": "accel_set_options", 00:19:11.802 "params": { 00:19:11.802 "small_cache_size": 128, 00:19:11.802 "large_cache_size": 16, 00:19:11.802 "task_count": 2048, 00:19:11.802 "sequence_count": 2048, 00:19:11.802 "buf_count": 2048 00:19:11.802 } 00:19:11.802 } 00:19:11.802 ] 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "subsystem": "bdev", 00:19:11.802 "config": [ 00:19:11.802 { 00:19:11.802 "method": "bdev_set_options", 00:19:11.802 "params": { 00:19:11.802 "bdev_io_pool_size": 65535, 00:19:11.802 "bdev_io_cache_size": 256, 00:19:11.802 "bdev_auto_examine": true, 00:19:11.802 "iobuf_small_cache_size": 128, 00:19:11.802 "iobuf_large_cache_size": 16 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "bdev_raid_set_options", 00:19:11.802 "params": { 00:19:11.802 "process_window_size_kb": 1024, 00:19:11.802 "process_max_bandwidth_mb_sec": 0 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "bdev_iscsi_set_options", 00:19:11.802 "params": { 00:19:11.802 "timeout_sec": 30 00:19:11.802 } 00:19:11.802 }, 00:19:11.802 { 00:19:11.802 "method": "bdev_nvme_set_options", 00:19:11.802 "params": { 00:19:11.802 "action_on_timeout": "none", 00:19:11.802 "timeout_us": 0, 00:19:11.802 "timeout_admin_us": 0, 00:19:11.802 "keep_alive_timeout_ms": 10000, 00:19:11.802 "arbitration_burst": 0, 00:19:11.802 "low_priority_weight": 0, 00:19:11.802 "medium_priority_weight": 0, 00:19:11.802 "high_priority_weight": 0, 00:19:11.802 "nvme_adminq_poll_period_us": 10000, 00:19:11.802 "nvme_ioq_poll_period_us": 0, 00:19:11.802 "io_queue_requests": 0, 00:19:11.802 "delay_cmd_submit": true, 00:19:11.802 "transport_retry_count": 4, 00:19:11.802 "bdev_retry_count": 3, 00:19:11.802 "transport_ack_timeout": 0, 00:19:11.802 "ctrlr_loss_timeout_sec": 0, 00:19:11.802 "reconnect_delay_sec": 0, 00:19:11.802 "fast_io_fail_timeout_sec": 0, 00:19:11.802 "disable_auto_failback": false, 00:19:11.802 "generate_uuids": false, 00:19:11.802 "transport_tos": 0, 00:19:11.802 "nvme_error_stat": false, 00:19:11.802 "rdma_srq_size": 0, 00:19:11.803 "io_path_stat": false, 00:19:11.803 "allow_accel_sequence": false, 00:19:11.803 "rdma_max_cq_size": 0, 00:19:11.803 "rdma_cm_event_timeout_ms": 0, 00:19:11.803 "dhchap_digests": [ 00:19:11.803 "sha256", 00:19:11.803 "sha384", 00:19:11.803 "sha512" 00:19:11.803 ], 00:19:11.803 
"dhchap_dhgroups": [ 00:19:11.803 "null", 00:19:11.803 "ffdhe2048", 00:19:11.803 "ffdhe3072", 00:19:11.803 "ffdhe4096", 00:19:11.803 "ffdhe6144", 00:19:11.803 "ffdhe8192" 00:19:11.803 ] 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "bdev_nvme_set_hotplug", 00:19:11.803 "params": { 00:19:11.803 "period_us": 100000, 00:19:11.803 "enable": false 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "bdev_malloc_create", 00:19:11.803 "params": { 00:19:11.803 "name": "malloc0", 00:19:11.803 "num_blocks": 8192, 00:19:11.803 "block_size": 4096, 00:19:11.803 "physical_block_size": 4096, 00:19:11.803 "uuid": "ffe86566-fa5d-4a6f-a0f6-03326f71f8c4", 00:19:11.803 "optimal_io_boundary": 0, 00:19:11.803 "md_size": 0, 00:19:11.803 "dif_type": 0, 00:19:11.803 "dif_is_head_of_md": false, 00:19:11.803 "dif_pi_format": 0 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "bdev_wait_for_examine" 00:19:11.803 } 00:19:11.803 ] 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "subsystem": "nbd", 00:19:11.803 "config": [] 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "subsystem": "scheduler", 00:19:11.803 "config": [ 00:19:11.803 { 00:19:11.803 "method": "framework_set_scheduler", 00:19:11.803 "params": { 00:19:11.803 "name": "static" 00:19:11.803 } 00:19:11.803 } 00:19:11.803 ] 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "subsystem": "nvmf", 00:19:11.803 "config": [ 00:19:11.803 { 00:19:11.803 "method": "nvmf_set_config", 00:19:11.803 "params": { 00:19:11.803 "discovery_filter": "match_any", 00:19:11.803 "admin_cmd_passthru": { 00:19:11.803 "identify_ctrlr": false 00:19:11.803 }, 00:19:11.803 "dhchap_digests": [ 00:19:11.803 "sha256", 00:19:11.803 "sha384", 00:19:11.803 "sha512" 00:19:11.803 ], 00:19:11.803 "dhchap_dhgroups": [ 00:19:11.803 "null", 00:19:11.803 "ffdhe2048", 00:19:11.803 "ffdhe3072", 00:19:11.803 "ffdhe4096", 00:19:11.803 "ffdhe6144", 00:19:11.803 "ffdhe8192" 00:19:11.803 ] 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_set_max_subsystems", 00:19:11.803 "params": { 00:19:11.803 "max_subsystems": 1024 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_set_crdt", 00:19:11.803 "params": { 00:19:11.803 "crdt1": 0, 00:19:11.803 "crdt2": 0, 00:19:11.803 "crdt3": 0 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_create_transport", 00:19:11.803 "params": { 00:19:11.803 "trtype": "TCP", 00:19:11.803 "max_queue_depth": 128, 00:19:11.803 "max_io_qpairs_per_ctrlr": 127, 00:19:11.803 "in_capsule_data_size": 4096, 00:19:11.803 "max_io_size": 131072, 00:19:11.803 "io_unit_size": 131072, 00:19:11.803 "max_aq_depth": 128, 00:19:11.803 "num_shared_buffers": 511, 00:19:11.803 "buf_cache_size": 4294967295, 00:19:11.803 "dif_insert_or_strip": false, 00:19:11.803 "zcopy": false, 00:19:11.803 "c2h_success": false, 00:19:11.803 "sock_priority": 0, 00:19:11.803 "abort_timeout_sec": 1, 00:19:11.803 "ack_timeout": 0, 00:19:11.803 "data_wr_pool_size": 0 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_create_subsystem", 00:19:11.803 "params": { 00:19:11.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.803 "allow_any_host": false, 00:19:11.803 "serial_number": "00000000000000000000", 00:19:11.803 "model_number": "SPDK bdev Controller", 00:19:11.803 "max_namespaces": 32, 00:19:11.803 "min_cntlid": 1, 00:19:11.803 "max_cntlid": 65519, 00:19:11.803 "ana_reporting": false 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_subsystem_add_host", 
00:19:11.803 "params": { 00:19:11.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.803 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.803 "psk": "key0" 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_subsystem_add_ns", 00:19:11.803 "params": { 00:19:11.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.803 "namespace": { 00:19:11.803 "nsid": 1, 00:19:11.803 "bdev_name": "malloc0", 00:19:11.803 "nguid": "FFE86566FA5D4A6FA0F603326F71F8C4", 00:19:11.803 "uuid": "ffe86566-fa5d-4a6f-a0f6-03326f71f8c4", 00:19:11.803 "no_auto_visible": false 00:19:11.803 } 00:19:11.803 } 00:19:11.803 }, 00:19:11.803 { 00:19:11.803 "method": "nvmf_subsystem_add_listener", 00:19:11.803 "params": { 00:19:11.803 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.803 "listen_address": { 00:19:11.803 "trtype": "TCP", 00:19:11.803 "adrfam": "IPv4", 00:19:11.803 "traddr": "10.0.0.2", 00:19:11.803 "trsvcid": "4420" 00:19:11.803 }, 00:19:11.803 "secure_channel": false, 00:19:11.803 "sock_impl": "ssl" 00:19:11.803 } 00:19:11.803 } 00:19:11.803 ] 00:19:11.803 } 00:19:11.803 ] 00:19:11.803 }' 00:19:11.803 11:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:12.062 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:12.062 "subsystems": [ 00:19:12.062 { 00:19:12.062 "subsystem": "keyring", 00:19:12.062 "config": [ 00:19:12.062 { 00:19:12.062 "method": "keyring_file_add_key", 00:19:12.062 "params": { 00:19:12.062 "name": "key0", 00:19:12.062 "path": "/tmp/tmp.dnyitBhKEg" 00:19:12.062 } 00:19:12.062 } 00:19:12.062 ] 00:19:12.062 }, 00:19:12.062 { 00:19:12.062 "subsystem": "iobuf", 00:19:12.062 "config": [ 00:19:12.062 { 00:19:12.062 "method": "iobuf_set_options", 00:19:12.062 "params": { 00:19:12.062 "small_pool_count": 8192, 00:19:12.062 "large_pool_count": 1024, 00:19:12.062 "small_bufsize": 8192, 00:19:12.062 "large_bufsize": 135168, 00:19:12.062 "enable_numa": false 00:19:12.062 } 00:19:12.062 } 00:19:12.062 ] 00:19:12.062 }, 00:19:12.062 { 00:19:12.062 "subsystem": "sock", 00:19:12.062 "config": [ 00:19:12.062 { 00:19:12.063 "method": "sock_set_default_impl", 00:19:12.063 "params": { 00:19:12.063 "impl_name": "uring" 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "sock_impl_set_options", 00:19:12.063 "params": { 00:19:12.063 "impl_name": "ssl", 00:19:12.063 "recv_buf_size": 4096, 00:19:12.063 "send_buf_size": 4096, 00:19:12.063 "enable_recv_pipe": true, 00:19:12.063 "enable_quickack": false, 00:19:12.063 "enable_placement_id": 0, 00:19:12.063 "enable_zerocopy_send_server": true, 00:19:12.063 "enable_zerocopy_send_client": false, 00:19:12.063 "zerocopy_threshold": 0, 00:19:12.063 "tls_version": 0, 00:19:12.063 "enable_ktls": false 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "sock_impl_set_options", 00:19:12.063 "params": { 00:19:12.063 "impl_name": "posix", 00:19:12.063 "recv_buf_size": 2097152, 00:19:12.063 "send_buf_size": 2097152, 00:19:12.063 "enable_recv_pipe": true, 00:19:12.063 "enable_quickack": false, 00:19:12.063 "enable_placement_id": 0, 00:19:12.063 "enable_zerocopy_send_server": true, 00:19:12.063 "enable_zerocopy_send_client": false, 00:19:12.063 "zerocopy_threshold": 0, 00:19:12.063 "tls_version": 0, 00:19:12.063 "enable_ktls": false 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "sock_impl_set_options", 00:19:12.063 "params": { 00:19:12.063 "impl_name": "uring", 00:19:12.063 
"recv_buf_size": 2097152, 00:19:12.063 "send_buf_size": 2097152, 00:19:12.063 "enable_recv_pipe": true, 00:19:12.063 "enable_quickack": false, 00:19:12.063 "enable_placement_id": 0, 00:19:12.063 "enable_zerocopy_send_server": false, 00:19:12.063 "enable_zerocopy_send_client": false, 00:19:12.063 "zerocopy_threshold": 0, 00:19:12.063 "tls_version": 0, 00:19:12.063 "enable_ktls": false 00:19:12.063 } 00:19:12.063 } 00:19:12.063 ] 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "subsystem": "vmd", 00:19:12.063 "config": [] 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "subsystem": "accel", 00:19:12.063 "config": [ 00:19:12.063 { 00:19:12.063 "method": "accel_set_options", 00:19:12.063 "params": { 00:19:12.063 "small_cache_size": 128, 00:19:12.063 "large_cache_size": 16, 00:19:12.063 "task_count": 2048, 00:19:12.063 "sequence_count": 2048, 00:19:12.063 "buf_count": 2048 00:19:12.063 } 00:19:12.063 } 00:19:12.063 ] 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "subsystem": "bdev", 00:19:12.063 "config": [ 00:19:12.063 { 00:19:12.063 "method": "bdev_set_options", 00:19:12.063 "params": { 00:19:12.063 "bdev_io_pool_size": 65535, 00:19:12.063 "bdev_io_cache_size": 256, 00:19:12.063 "bdev_auto_examine": true, 00:19:12.063 "iobuf_small_cache_size": 128, 00:19:12.063 "iobuf_large_cache_size": 16 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_raid_set_options", 00:19:12.063 "params": { 00:19:12.063 "process_window_size_kb": 1024, 00:19:12.063 "process_max_bandwidth_mb_sec": 0 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_iscsi_set_options", 00:19:12.063 "params": { 00:19:12.063 "timeout_sec": 30 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_nvme_set_options", 00:19:12.063 "params": { 00:19:12.063 "action_on_timeout": "none", 00:19:12.063 "timeout_us": 0, 00:19:12.063 "timeout_admin_us": 0, 00:19:12.063 "keep_alive_timeout_ms": 10000, 00:19:12.063 "arbitration_burst": 0, 00:19:12.063 "low_priority_weight": 0, 00:19:12.063 "medium_priority_weight": 0, 00:19:12.063 "high_priority_weight": 0, 00:19:12.063 "nvme_adminq_poll_period_us": 10000, 00:19:12.063 "nvme_ioq_poll_period_us": 0, 00:19:12.063 "io_queue_requests": 512, 00:19:12.063 "delay_cmd_submit": true, 00:19:12.063 "transport_retry_count": 4, 00:19:12.063 "bdev_retry_count": 3, 00:19:12.063 "transport_ack_timeout": 0, 00:19:12.063 "ctrlr_loss_timeout_sec": 0, 00:19:12.063 "reconnect_delay_sec": 0, 00:19:12.063 "fast_io_fail_timeout_sec": 0, 00:19:12.063 "disable_auto_failback": false, 00:19:12.063 "generate_uuids": false, 00:19:12.063 "transport_tos": 0, 00:19:12.063 "nvme_error_stat": false, 00:19:12.063 "rdma_srq_size": 0, 00:19:12.063 "io_path_stat": false, 00:19:12.063 "allow_accel_sequence": false, 00:19:12.063 "rdma_max_cq_size": 0, 00:19:12.063 "rdma_cm_event_timeout_ms": 0, 00:19:12.063 "dhchap_digests": [ 00:19:12.063 "sha256", 00:19:12.063 "sha384", 00:19:12.063 "sha512" 00:19:12.063 ], 00:19:12.063 "dhchap_dhgroups": [ 00:19:12.063 "null", 00:19:12.063 "ffdhe2048", 00:19:12.063 "ffdhe3072", 00:19:12.063 "ffdhe4096", 00:19:12.063 "ffdhe6144", 00:19:12.063 "ffdhe8192" 00:19:12.063 ] 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_nvme_attach_controller", 00:19:12.063 "params": { 00:19:12.063 "name": "nvme0", 00:19:12.063 "trtype": "TCP", 00:19:12.063 "adrfam": "IPv4", 00:19:12.063 "traddr": "10.0.0.2", 00:19:12.063 "trsvcid": "4420", 00:19:12.063 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.063 "prchk_reftag": false, 00:19:12.063 
"prchk_guard": false, 00:19:12.063 "ctrlr_loss_timeout_sec": 0, 00:19:12.063 "reconnect_delay_sec": 0, 00:19:12.063 "fast_io_fail_timeout_sec": 0, 00:19:12.063 "psk": "key0", 00:19:12.063 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.063 "hdgst": false, 00:19:12.063 "ddgst": false, 00:19:12.063 "multipath": "multipath" 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_nvme_set_hotplug", 00:19:12.063 "params": { 00:19:12.063 "period_us": 100000, 00:19:12.063 "enable": false 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_enable_histogram", 00:19:12.063 "params": { 00:19:12.063 "name": "nvme0n1", 00:19:12.063 "enable": true 00:19:12.063 } 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "method": "bdev_wait_for_examine" 00:19:12.063 } 00:19:12.063 ] 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "subsystem": "nbd", 00:19:12.063 "config": [] 00:19:12.063 } 00:19:12.063 ] 00:19:12.063 }' 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72265 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72265 ']' 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72265 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.063 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72265 00:19:12.063 killing process with pid 72265 00:19:12.063 Received shutdown signal, test time was about 1.000000 seconds 00:19:12.063 00:19:12.063 Latency(us) 00:19:12.063 [2024-12-05T11:02:39.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.063 [2024-12-05T11:02:39.223Z] =================================================================================================================== 00:19:12.064 [2024-12-05T11:02:39.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.064 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:12.064 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:12.064 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72265' 00:19:12.064 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72265 00:19:12.064 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72265 00:19:12.322 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72233 00:19:12.322 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72233 ']' 00:19:12.322 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72233 00:19:12.322 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.322 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72233 00:19:12.323 killing process with pid 72233 00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72233' 00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72233 00:19:12.323 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72233 00:19:12.582 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:12.582 "subsystems": [ 00:19:12.582 { 00:19:12.582 "subsystem": "keyring", 00:19:12.582 "config": [ 00:19:12.582 { 00:19:12.582 "method": "keyring_file_add_key", 00:19:12.582 "params": { 00:19:12.582 "name": "key0", 00:19:12.582 "path": "/tmp/tmp.dnyitBhKEg" 00:19:12.582 } 00:19:12.582 } 00:19:12.582 ] 00:19:12.582 }, 00:19:12.582 { 00:19:12.582 "subsystem": "iobuf", 00:19:12.582 "config": [ 00:19:12.582 { 00:19:12.582 "method": "iobuf_set_options", 00:19:12.582 "params": { 00:19:12.582 "small_pool_count": 8192, 00:19:12.582 "large_pool_count": 1024, 00:19:12.582 "small_bufsize": 8192, 00:19:12.582 "large_bufsize": 135168, 00:19:12.582 "enable_numa": false 00:19:12.582 } 00:19:12.582 } 00:19:12.582 ] 00:19:12.582 }, 00:19:12.582 { 00:19:12.582 "subsystem": "sock", 00:19:12.582 "config": [ 00:19:12.582 { 00:19:12.582 "method": "sock_set_default_impl", 00:19:12.582 "params": { 00:19:12.582 "impl_name": "uring" 00:19:12.582 } 00:19:12.582 }, 00:19:12.582 { 00:19:12.582 "method": "sock_impl_set_options", 00:19:12.582 "params": { 00:19:12.582 "impl_name": "ssl", 00:19:12.582 "recv_buf_size": 4096, 00:19:12.582 "send_buf_size": 4096, 00:19:12.582 "enable_recv_pipe": true, 00:19:12.582 "enable_quickack": false, 00:19:12.582 "enable_placement_id": 0, 00:19:12.582 "enable_zerocopy_send_server": true, 00:19:12.582 "enable_zerocopy_send_client": false, 00:19:12.582 "zerocopy_threshold": 0, 00:19:12.582 "tls_version": 0, 00:19:12.582 "enable_ktls": false 00:19:12.582 } 00:19:12.582 }, 00:19:12.582 { 00:19:12.582 "method": "sock_impl_set_options", 00:19:12.582 "params": { 00:19:12.582 "impl_name": "posix", 00:19:12.582 "recv_buf_size": 2097152, 00:19:12.582 "send_buf_size": 2097152, 00:19:12.582 "enable_recv_pipe": true, 00:19:12.582 "enable_quickack": false, 00:19:12.582 "enable_placement_id": 0, 00:19:12.582 "enable_zerocopy_send_server": true, 00:19:12.582 "enable_zerocopy_send_client": false, 00:19:12.582 "zerocopy_threshold": 0, 00:19:12.582 "tls_version": 0, 00:19:12.582 "enable_ktls": false 00:19:12.582 } 00:19:12.582 }, 00:19:12.582 { 00:19:12.582 "method": "sock_impl_set_options", 00:19:12.583 "params": { 00:19:12.583 "impl_name": "uring", 00:19:12.583 "recv_buf_size": 2097152, 00:19:12.583 "send_buf_size": 2097152, 00:19:12.583 "enable_recv_pipe": true, 00:19:12.583 "enable_quickack": false, 00:19:12.583 "enable_placement_id": 0, 00:19:12.583 "enable_zerocopy_send_server": false, 00:19:12.583 "enable_zerocopy_send_client": false, 00:19:12.583 "zerocopy_threshold": 0, 00:19:12.583 "tls_version": 0, 00:19:12.583 "enable_ktls": false 00:19:12.583 } 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "subsystem": "vmd", 00:19:12.583 "config": [] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "subsystem": "accel", 00:19:12.583 "config": [ 00:19:12.583 { 00:19:12.583 "method": "accel_set_options", 00:19:12.583 "params": { 00:19:12.583 "small_cache_size": 128, 00:19:12.583 "large_cache_size": 16, 00:19:12.583 "task_count": 
2048, 00:19:12.583 "sequence_count": 2048, 00:19:12.583 "buf_count": 2048 00:19:12.583 } 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "subsystem": "bdev", 00:19:12.583 "config": [ 00:19:12.583 { 00:19:12.583 "method": "bdev_set_options", 00:19:12.583 "params": { 00:19:12.583 "bdev_io_pool_size": 65535, 00:19:12.583 "bdev_io_cache_size": 256, 00:19:12.583 "bdev_auto_examine": true, 00:19:12.583 "iobuf_small_cache_size": 128, 00:19:12.583 "iobuf_large_cache_size": 16 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_raid_set_options", 00:19:12.583 "params": { 00:19:12.583 "process_window_size_kb": 1024, 00:19:12.583 "process_max_bandwidth_mb_sec": 0 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_iscsi_set_options", 00:19:12.583 "params": { 00:19:12.583 "timeout_sec": 30 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_nvme_set_options", 00:19:12.583 "params": { 00:19:12.583 "action_on_timeout": "none", 00:19:12.583 "timeout_us": 0, 00:19:12.583 "timeout_admin_us": 0, 00:19:12.583 "keep_alive_timeout_ms": 10000, 00:19:12.583 "arbitration_burst": 0, 00:19:12.583 "low_priority_weight": 0, 00:19:12.583 "medium_priority_weight": 0, 00:19:12.583 "high_priority_weight": 0, 00:19:12.583 "nvme_adminq_poll_period_us": 10000, 00:19:12.583 "nvme_ioq_poll_period_us": 0, 00:19:12.583 "io_queue_requests": 0, 00:19:12.583 "delay_cmd_submit": true, 00:19:12.583 "transport_retry_count": 4, 00:19:12.583 "bdev_retry_count": 3, 00:19:12.583 "transport_ack_timeout": 0, 00:19:12.583 "ctrlr_loss_timeout_sec": 0, 00:19:12.583 "reconnect_delay_sec": 0, 00:19:12.583 "fast_io_fail_timeout_sec": 0, 00:19:12.583 "disable_auto_failback": false, 00:19:12.583 "generate_uuids": false, 00:19:12.583 "transport_tos": 0, 00:19:12.583 "nvme_error_stat": false, 00:19:12.583 "rdma_srq_size": 0, 00:19:12.583 "io_path_stat": false, 00:19:12.583 "allow_accel_sequence": false, 00:19:12.583 "rdma_max_cq_size": 0, 00:19:12.583 "rdma_cm_event_timeout_ms": 0, 00:19:12.583 "dhchap_digests": [ 00:19:12.583 "sha256", 00:19:12.583 "sha384", 00:19:12.583 "sha512" 00:19:12.583 ], 00:19:12.583 "dhchap_dhgroups": [ 00:19:12.583 "null", 00:19:12.583 "ffdhe2048", 00:19:12.583 "ffdhe3072", 00:19:12.583 "ffdhe4096", 00:19:12.583 "ffdhe6144", 00:19:12.583 "ffdhe8192" 00:19:12.583 ] 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_nvme_set_hotplug", 00:19:12.583 "params": { 00:19:12.583 "period_us": 100000, 00:19:12.583 "enable": false 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_malloc_create", 00:19:12.583 "params": { 00:19:12.583 "name": "malloc0", 00:19:12.583 "num_blocks": 8192, 00:19:12.583 "block_size": 4096, 00:19:12.583 "physical_block_size": 4096, 00:19:12.583 "uuid": "ffe86566-fa5d-4a6f-a0f6-03326f71f8c4", 00:19:12.583 "optimal_io_boundary": 0, 00:19:12.583 "md_size": 0, 00:19:12.583 "dif_type": 0, 00:19:12.583 "dif_is_head_of_md": false, 00:19:12.583 "dif_pi_format": 0 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "bdev_wait_for_examine" 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "subsystem": "nbd", 00:19:12.583 "config": [] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "subsystem": "scheduler", 00:19:12.583 "config": [ 00:19:12.583 { 00:19:12.583 "method": "framework_set_scheduler", 00:19:12.583 "params": { 00:19:12.583 "name": "static" 00:19:12.583 } 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 
"subsystem": "nvmf", 00:19:12.583 "config": [ 00:19:12.583 { 00:19:12.583 "method": "nvmf_set_config", 00:19:12.583 "params": { 00:19:12.583 "discovery_filter": "match_any", 00:19:12.583 "admin_cmd_passthru": { 00:19:12.583 "identify_ctrlr": false 00:19:12.583 }, 00:19:12.583 "dhchap_digests": [ 00:19:12.583 "sha256", 00:19:12.583 "sha384", 00:19:12.583 "sha512" 00:19:12.583 ], 00:19:12.583 "dhchap_dhgroups": [ 00:19:12.583 "null", 00:19:12.583 "ffdhe2048", 00:19:12.583 "ffdhe3072", 00:19:12.583 "ffdhe4096", 00:19:12.583 "ffdhe6144", 00:19:12.583 "ffdhe8192" 00:19:12.583 ] 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_set_max_subsystems", 00:19:12.583 "params": { 00:19:12.583 "max_subsystems": 1024 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_set_crdt", 00:19:12.583 "params": { 00:19:12.583 "crdt1": 0, 00:19:12.583 "crdt2": 0, 00:19:12.583 "crdt3": 0 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_create_transport", 00:19:12.583 "params": { 00:19:12.583 "trtype": "TCP", 00:19:12.583 "max_queue_depth": 128, 00:19:12.583 "max_io_qpairs_per_ctrlr": 127, 00:19:12.583 "in_capsule_data_size": 4096, 00:19:12.583 "max_io_size": 131072, 00:19:12.583 "io_unit_size": 131072, 00:19:12.583 "max_aq_depth": 128, 00:19:12.583 "num_shared_buffers": 511, 00:19:12.583 "buf_cache_size": 4294967295, 00:19:12.583 "dif_insert_or_strip": false, 00:19:12.583 "zcopy": false, 00:19:12.583 "c2h_success": false, 00:19:12.583 "sock_priority": 0, 00:19:12.583 "abort_timeout_sec": 1, 00:19:12.583 "ack_timeout": 0, 00:19:12.583 "data_wr_pool_size": 0 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_create_subsystem", 00:19:12.583 "params": { 00:19:12.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.583 "allow_any_host": false, 00:19:12.583 "serial_number": "00000000000000000000", 00:19:12.583 "model_number": "SPDK bdev Controller", 00:19:12.583 "max_namespaces": 32, 00:19:12.583 "min_cntlid": 1, 00:19:12.583 "max_cntlid": 65519, 00:19:12.583 "ana_reporting": false 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_subsystem_add_host", 00:19:12.583 "params": { 00:19:12.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.583 "host": "nqn.2016-06.io.spdk:host1", 00:19:12.583 "psk": "key0" 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_subsystem_add_ns", 00:19:12.583 "params": { 00:19:12.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.583 "namespace": { 00:19:12.583 "nsid": 1, 00:19:12.583 "bdev_name": "malloc0", 00:19:12.583 "nguid": "FFE86566FA5D4A6FA0F603326F71F8C4", 00:19:12.583 "uuid": "ffe86566-fa5d-4a6f-a0f6-03326f71f8c4", 00:19:12.583 "no_auto_visible": false 00:19:12.583 } 00:19:12.583 } 00:19:12.583 }, 00:19:12.583 { 00:19:12.583 "method": "nvmf_subsystem_add_listener", 00:19:12.583 "params": { 00:19:12.583 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.583 "listen_address": { 00:19:12.583 "trtype": "TCP", 00:19:12.583 "adrfam": "IPv4", 00:19:12.583 "traddr": "10.0.0.2", 00:19:12.583 "trsvcid": "4420" 00:19:12.583 }, 00:19:12.583 "secure_channel": false, 00:19:12.583 "sock_impl": "ssl" 00:19:12.583 } 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 } 00:19:12.583 ] 00:19:12.583 }' 00:19:12.583 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:12.583 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:12.583 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.583 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=72320 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 72320 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72320 ']' 00:19:12.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.584 11:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.584 [2024-12-05 11:02:39.637192] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:12.584 [2024-12-05 11:02:39.637268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.842 [2024-12-05 11:02:39.774477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.843 [2024-12-05 11:02:39.826314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.843 [2024-12-05 11:02:39.826368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.843 [2024-12-05 11:02:39.826379] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.843 [2024-12-05 11:02:39.826388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.843 [2024-12-05 11:02:39.826395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
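[Annotation] Here the captured tgtcfg is echoed straight into a fresh nvmf_tgt through /dev/fd/62 (and, just below, bperfcfg into a new bdevperf through /dev/fd/63), so the TLS target comes back up from the saved JSON alone, with no further RPC calls. A sketch of the replay under the same assumptions, with the netns name and binary path as in the trace; bash process substitution produces the /dev/fd path seen in the log:

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg") &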
00:19:12.843 [2024-12-05 11:02:39.826718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.843 [2024-12-05 11:02:39.983232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:13.101 [2024-12-05 11:02:40.056534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.101 [2024-12-05 11:02:40.088445] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.101 [2024-12-05 11:02:40.088659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72352 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72352 /var/tmp/bdevperf.sock 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72352 ']' 00:19:13.669 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:13.669 "subsystems": [ 00:19:13.669 { 00:19:13.669 "subsystem": "keyring", 00:19:13.669 "config": [ 00:19:13.669 { 00:19:13.669 "method": "keyring_file_add_key", 00:19:13.669 "params": { 00:19:13.669 "name": "key0", 00:19:13.669 "path": "/tmp/tmp.dnyitBhKEg" 00:19:13.669 } 00:19:13.669 } 00:19:13.669 ] 00:19:13.669 }, 00:19:13.669 { 00:19:13.669 "subsystem": "iobuf", 00:19:13.669 "config": [ 00:19:13.669 { 00:19:13.669 "method": "iobuf_set_options", 00:19:13.669 "params": { 00:19:13.669 "small_pool_count": 8192, 00:19:13.669 "large_pool_count": 1024, 00:19:13.669 "small_bufsize": 8192, 00:19:13.669 "large_bufsize": 135168, 00:19:13.669 "enable_numa": false 00:19:13.669 } 00:19:13.669 } 00:19:13.669 ] 00:19:13.669 }, 00:19:13.669 { 00:19:13.669 "subsystem": "sock", 00:19:13.669 "config": [ 00:19:13.669 { 00:19:13.669 "method": "sock_set_default_impl", 00:19:13.669 "params": { 00:19:13.669 "impl_name": "uring" 00:19:13.669 } 00:19:13.669 }, 00:19:13.669 { 00:19:13.669 "method": "sock_impl_set_options", 00:19:13.669 "params": { 00:19:13.669 "impl_name": "ssl", 00:19:13.669 "recv_buf_size": 4096, 00:19:13.669 "send_buf_size": 4096, 00:19:13.669 "enable_recv_pipe": true, 00:19:13.669 "enable_quickack": false, 00:19:13.669 "enable_placement_id": 0, 00:19:13.669 "enable_zerocopy_send_server": true, 00:19:13.669 "enable_zerocopy_send_client": false, 00:19:13.669 "zerocopy_threshold": 0, 00:19:13.669 "tls_version": 0, 00:19:13.669 "enable_ktls": false 00:19:13.669 } 00:19:13.669 }, 00:19:13.669 { 00:19:13.669 "method": "sock_impl_set_options", 00:19:13.669 "params": { 00:19:13.669 
"impl_name": "posix", 00:19:13.669 "recv_buf_size": 2097152, 00:19:13.670 "send_buf_size": 2097152, 00:19:13.670 "enable_recv_pipe": true, 00:19:13.670 "enable_quickack": false, 00:19:13.670 "enable_placement_id": 0, 00:19:13.670 "enable_zerocopy_send_server": true, 00:19:13.670 "enable_zerocopy_send_client": false, 00:19:13.670 "zerocopy_threshold": 0, 00:19:13.670 "tls_version": 0, 00:19:13.670 "enable_ktls": false 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "sock_impl_set_options", 00:19:13.670 "params": { 00:19:13.670 "impl_name": "uring", 00:19:13.670 "recv_buf_size": 2097152, 00:19:13.670 "send_buf_size": 2097152, 00:19:13.670 "enable_recv_pipe": true, 00:19:13.670 "enable_quickack": false, 00:19:13.670 "enable_placement_id": 0, 00:19:13.670 "enable_zerocopy_send_server": false, 00:19:13.670 "enable_zerocopy_send_client": false, 00:19:13.670 "zerocopy_threshold": 0, 00:19:13.670 "tls_version": 0, 00:19:13.670 "enable_ktls": false 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "vmd", 00:19:13.670 "config": [] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "accel", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "accel_set_options", 00:19:13.670 "params": { 00:19:13.670 "small_cache_size": 128, 00:19:13.670 "large_cache_size": 16, 00:19:13.670 "task_count": 2048, 00:19:13.670 "sequence_count": 2048, 00:19:13.670 "buf_count": 2048 00:19:13.670 } 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "bdev", 00:19:13.670 "config": [ 00:19:13.670 { 00:19:13.670 "method": "bdev_set_options", 00:19:13.670 "params": { 00:19:13.670 "bdev_io_pool_size": 65535, 00:19:13.670 "bdev_io_cache_size": 256, 00:19:13.670 "bdev_auto_examine": true, 00:19:13.670 "iobuf_small_cache_size": 128, 00:19:13.670 "iobuf_large_cache_size": 16 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_raid_set_options", 00:19:13.670 "params": { 00:19:13.670 "process_window_size_kb": 1024, 00:19:13.670 "process_max_bandwidth_mb_sec": 0 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_iscsi_set_options", 00:19:13.670 "params": { 00:19:13.670 "timeout_sec": 30 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_nvme_set_options", 00:19:13.670 "params": { 00:19:13.670 "action_on_timeout": "none", 00:19:13.670 "timeout_us": 0, 00:19:13.670 "timeout_admin_us": 0, 00:19:13.670 "keep_alive_timeout_ms": 10000, 00:19:13.670 "arbitration_burst": 0, 00:19:13.670 "low_priority_weight": 0, 00:19:13.670 "medium_priority_weight": 0, 00:19:13.670 "high_priority_weight": 0, 00:19:13.670 "nvme_adminq_poll_period_us": 10000, 00:19:13.670 "nvme_ioq_poll_period_us": 0, 00:19:13.670 "io_queue_requests": 512, 00:19:13.670 "delay_cmd_submit": true, 00:19:13.670 "transport_retry_count": 4, 00:19:13.670 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.670 "bdev_retry_count": 3, 00:19:13.670 "transport_ack_timeout": 0, 00:19:13.670 "ctrlr_loss_timeout_sec": 0, 00:19:13.670 "reconnect_delay_sec": 0, 00:19:13.670 "fast_io_fail_timeout_sec": 0, 00:19:13.670 "disable_auto_failback": false, 00:19:13.670 "generate_uuids": false, 00:19:13.670 "transport_tos": 0, 00:19:13.670 "nvme_error_stat": false, 00:19:13.670 "rdma_srq_size": 0, 00:19:13.670 "io_path_stat": false, 00:19:13.670 "allow_accel_sequence": false, 00:19:13.670 "rdma_max_cq_size": 0, 00:19:13.670 
"rdma_cm_event_timeout_ms": 0, 00:19:13.670 "dhchap_digests": [ 00:19:13.670 "sha256", 00:19:13.670 "sha384", 00:19:13.670 "sha512" 00:19:13.670 ], 00:19:13.670 "dhchap_dhgroups": [ 00:19:13.670 "null", 00:19:13.670 "ffdhe2048", 00:19:13.670 "ffdhe3072", 00:19:13.670 "ffdhe4096", 00:19:13.670 "ffdhe6144", 00:19:13.670 "ffdhe8192" 00:19:13.670 ] 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_nvme_attach_controller", 00:19:13.670 "params": { 00:19:13.670 "name": "nvme0", 00:19:13.670 "trtype": "TCP", 00:19:13.670 "adrfam": "IPv4", 00:19:13.670 "traddr": "10.0.0.2", 00:19:13.670 "trsvcid": "4420", 00:19:13.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.670 "prchk_reftag": false, 00:19:13.670 "prchk_guard": false, 00:19:13.670 "ctrlr_loss_timeout_sec": 0, 00:19:13.670 "reconnect_delay_sec": 0, 00:19:13.670 "fast_io_fail_timeout_sec": 0, 00:19:13.670 "psk": "key0", 00:19:13.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.670 "hdgst": false, 00:19:13.670 "ddgst": false, 00:19:13.670 "multipath": "multipath" 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_nvme_set_hotplug", 00:19:13.670 "params": { 00:19:13.670 "period_us": 100000, 00:19:13.670 "enable": false 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_enable_histogram", 00:19:13.670 "params": { 00:19:13.670 "name": "nvme0n1", 00:19:13.670 "enable": true 00:19:13.670 } 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "method": "bdev_wait_for_examine" 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }, 00:19:13.670 { 00:19:13.670 "subsystem": "nbd", 00:19:13.670 "config": [] 00:19:13.670 } 00:19:13.670 ] 00:19:13.670 }' 00:19:13.670 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.670 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.670 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.670 11:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.670 [2024-12-05 11:02:40.628301] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:19:13.670 [2024-12-05 11:02:40.628536] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72352 ]
00:19:13.670 [2024-12-05 11:02:40.782116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:13.935 [2024-12-05 11:02:40.837187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:13.935 [2024-12-05 11:02:40.961631] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:13.935 [2024-12-05 11:02:41.005479] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:19:14.518 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:14.518 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:19:14.518 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:14.518 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:19:14.777 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:14.777 11:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:14.777 Running I/O for 1 seconds...
00:19:15.778 5642.00 IOPS, 22.04 MiB/s
00:19:15.778
00:19:15.778 Latency(us)
00:19:15.778 [2024-12-05T11:02:42.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:15.778 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:15.778 Verification LBA range: start 0x0 length 0x2000
00:19:15.778 nvme0n1 : 1.01 5700.88 22.27 0.00 0.00 22296.33 4553.30 18634.33
00:19:15.778 [2024-12-05T11:02:42.937Z] ===================================================================================================================
00:19:15.779 [2024-12-05T11:02:42.938Z] ===================================================================================================================
00:19:15.779 [2024-12-05T11:02:42.938Z] Total : 5700.88 22.27 0.00 0.00 22296.33 4553.30 18634.33
00:19:15.779 {
00:19:15.779 "results": [
00:19:15.779 {
00:19:15.779 "job": "nvme0n1",
00:19:15.779 "core_mask": "0x2",
00:19:15.779 "workload": "verify",
00:19:15.779 "status": "finished",
00:19:15.779 "verify_range": {
00:19:15.779 "start": 0,
00:19:15.779 "length": 8192
00:19:15.779 },
00:19:15.779 "queue_depth": 128,
00:19:15.779 "io_size": 4096,
00:19:15.779 "runtime": 1.012125,
00:19:15.779 "iops": 5700.876867975793,
00:19:15.779 "mibps": 22.269050265530442,
00:19:15.779 "io_failed": 0,
00:19:15.779 "io_timeout": 0,
00:19:15.779 "avg_latency_us": 22296.32581765537,
00:19:15.779 "min_latency_us": 4553.304417670683,
00:19:15.779 "max_latency_us": 18634.332530120482
00:19:15.779 }
00:19:15.779 ],
00:19:15.779 "core_count": 1
00:19:15.779 }
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
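The verify summary a few lines up is internally consistent, which is a quick way to sanity-check a bdevperf run: MiB/s is just IOPS times the 4096-byte IO size, and with queue depth 128 Little's law ties IOPS to the average latency. Checking the printed numbers:

    # 5700.88 IOPS * 4096 B / 2^20 = 22.27 MiB/s, matching the MiB/s column
    awk 'BEGIN { printf "%.2f\n", 5700.88 * 4096 / 1048576 }'
    # Little's law: queue_depth / avg_latency = 128 / 22296.33 us = ~5741 IOPS,
    # close to the measured 5700.88 (the gap is ramp-up inside the 1.012 s runtime)
    awk 'BEGIN { printf "%.0f\n", 128 / (22296.33 / 1e6) }'
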
00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:16.038 11:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:16.038 nvmf_trace.0 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72352 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72352 ']' 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72352 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72352 00:19:16.038 killing process with pid 72352 00:19:16.038 Received shutdown signal, test time was about 1.000000 seconds 00:19:16.038 00:19:16.038 Latency(us) 00:19:16.038 [2024-12-05T11:02:43.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.038 [2024-12-05T11:02:43.197Z] =================================================================================================================== 00:19:16.038 [2024-12-05T11:02:43.197Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:16.038 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:16.039 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72352' 00:19:16.039 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72352 00:19:16.039 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72352 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:16.298 rmmod nvme_tcp 00:19:16.298 rmmod nvme_fabrics 00:19:16.298 rmmod nvme_keyring 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 72320 ']' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 72320 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72320 ']' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72320 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72320 00:19:16.298 killing process with pid 72320 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72320' 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72320 00:19:16.298 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72320 00:19:16.556 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:16.556 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:16.557 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:16.816 11:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.X4pbPUgBtZ /tmp/tmp.YBDFgniYQc /tmp/tmp.dnyitBhKEg 00:19:16.816 ************************************ 00:19:16.816 END TEST nvmf_tls 00:19:16.816 ************************************ 00:19:16.816 00:19:16.816 real 1m27.043s 00:19:16.816 user 2m14.177s 00:19:16.816 sys 0m31.855s 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.816 ************************************ 00:19:16.816 START TEST nvmf_fips 00:19:16.816 ************************************ 00:19:16.816 11:02:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:17.102 * Looking for test storage... 00:19:17.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:19:17.102 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.102 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.103 --rc genhtml_branch_coverage=1 00:19:17.103 --rc genhtml_function_coverage=1 00:19:17.103 --rc genhtml_legend=1 00:19:17.103 --rc geninfo_all_blocks=1 00:19:17.103 --rc geninfo_unexecuted_blocks=1 00:19:17.103 00:19:17.103 ' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.103 --rc genhtml_branch_coverage=1 00:19:17.103 --rc genhtml_function_coverage=1 00:19:17.103 --rc genhtml_legend=1 00:19:17.103 --rc geninfo_all_blocks=1 00:19:17.103 --rc geninfo_unexecuted_blocks=1 00:19:17.103 00:19:17.103 ' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.103 --rc genhtml_branch_coverage=1 00:19:17.103 --rc genhtml_function_coverage=1 00:19:17.103 --rc genhtml_legend=1 00:19:17.103 --rc geninfo_all_blocks=1 00:19:17.103 --rc geninfo_unexecuted_blocks=1 00:19:17.103 00:19:17.103 ' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.103 --rc genhtml_branch_coverage=1 00:19:17.103 --rc genhtml_function_coverage=1 00:19:17.103 --rc genhtml_legend=1 00:19:17.103 --rc geninfo_all_blocks=1 00:19:17.103 --rc geninfo_unexecuted_blocks=1 00:19:17.103 00:19:17.103 ' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
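The long xtrace run above is scripts/common.sh deciding `lt 1.15 2` for the lcov version; the same cmp_versions helper answers `ge 3.1.1 3.0.0` for openssl further down. Both version strings are split on dots, dashes, and colons, then compared numerically component by component, with missing components treated as 0. A condensed sketch of that comparison, reduced to the >= case (the real helper takes an operator argument and handles lt/gt/eq as well):

    # Sketch only: returns 0 (true) when version $1 >= version $2.
    version_ge() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 0  # first differing component decides
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 1
        done
        return 0  # all components equal
    }

    version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"
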
00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.103 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:17.104 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@86 -- # awk '{print $2}' 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:17.104 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:17.363 Error setting digest 00:19:17.363 40C2DDBBF47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:17.363 40C2DDBBF47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:17.363 
11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@223 -- # create_target_ns 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:17.363 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:17.364 11:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target0 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:17.364 11:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:17.364 10.0.0.1 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.364 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:17.365 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:19:17.365 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:17.365 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:17.365 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:17.365 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:17.643 10.0.0.2 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:17.643 
11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:17.643 
11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772163 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:17.643 10.0.0.3 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772164 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:17.643 10.0.0.4 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.643 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:17.644 
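set_ip expands a packed 32-bit value drawn from the address pool: the pool starts at 167772161 (0x0A000001, i.e. 10.0.0.1) and steps by two per pair, so pair 1 draws 167772163/167772164, which print as 10.0.0.3/10.0.0.4. A sketch of the conversion the val_to_ip trace shows (the in-tree helper may decompose the value differently):

# One byte per octet, highest byte first: 167772163 = 10*2^24 + 0 + 0 + 3.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}
val_to_ip 167772163   # 10.0.0.3 -> ip addr add 10.0.0.3/24 dev initiator1
val_to_ip 167772164   # 10.0.0.4 -> assigned to target1 inside nvmf_ns_spdk

The address is also echoed into /sys/class/net/<dev>/ifalias, which is how later lookups recover it without parsing ip(8) output.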
11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:17.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:17.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:17.644 00:19:17.644 --- 10.0.0.1 ping statistics --- 00:19:17.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.644 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.644 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:17.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
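ping_ips verifies each pair in both directions: the initiator address is pinged from inside the target namespace, and the target address from the host (the 10.0.0.2 replies continue below). The helper's shape, reconstructed from the @80-@83 trace; the real function uses eval on the expanded array rather than executing it directly:

ping_ip() {
    local ip=$1 in_ns=$2 count=1
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns     # e.g. NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
        "${ns[@]}" ping -c "$count" "$ip"
    else
        ping -c "$count" "$ip"
    fi
}
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD   # target netns -> initiator0, across the bridge
ping_ip 10.0.0.2                      # host -> target0 inside the netns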
00:19:17.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:19:17.904 00:19:17.904 --- 10.0.0.2 ping statistics --- 00:19:17.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.904 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:17.904 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:17.904 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.168 ms 00:19:17.904 00:19:17.904 --- 10.0.0.3 ping statistics --- 00:19:17.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.904 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:17.904 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:17.905 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:17.905 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:19:17.905 00:19:17.905 --- 10.0.0.4 ping statistics --- 00:19:17.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.905 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # return 0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:17.905 11:02:44 
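nvmf_legacy_env republishes the per-device addresses as the flat variables older tests consume; the lookups here and below complete the mapping NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_SECOND_TARGET_IP=10.0.0.4. Each lookup is just a read of the ifalias node that set_ip populated; a reconstruction of get_ip_address from the @156-@166 trace:

get_ip_address() {
    local dev=$1 in_ns=$2 ip
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns                              # namespace command prefix
        ip=$("${ns[@]}" cat "/sys/class/net/$dev/ifalias")
    else
        ip=$(cat "/sys/class/net/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"                          # empty alias -> nonzero return
}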
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:17.905 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=72671 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:17.906 11:02:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 72671 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72671 ']' 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:17.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.906 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:18.164 [2024-12-05 11:02:45.070383] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:18.164 [2024-12-05 11:02:45.070621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.164 [2024-12-05 11:02:45.225656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.164 [2024-12-05 11:02:45.274899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.164 [2024-12-05 11:02:45.275132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.164 [2024-12-05 11:02:45.275150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.164 [2024-12-05 11:02:45.275158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.164 [2024-12-05 11:02:45.275165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.164 [2024-12-05 11:02:45.275491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.164 [2024-12-05 11:02:45.317336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:19.095 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.095 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:19.095 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:19.095 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.095 11:02:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.7pK 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.7pK 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.7pK 00:19:19.095 11:02:46 
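The FIPS test provisions its TLS credential as a key file: the NVMe-TLS PSK string is written without a trailing newline into a mktemp path (this run drew /tmp/spdk-psk.7pK) and restricted to the owner before being wired into the target. As plain commands:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)   # template as in the trace; suffix varies per run
echo -n "$key" > "$key_path"         # -n: the PSK file must not gain a newline
chmod 0600 "$key_path"               # owner-only; keyring modules typically reject looser modes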
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.7pK 00:19:19.095 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.353 [2024-12-05 11:02:46.255868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.353 [2024-12-05 11:02:46.271789] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.353 [2024-12-05 11:02:46.271988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.353 malloc0 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72713 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72713 /var/tmp/bdevperf.sock 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72713 ']' 00:19:19.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.353 11:02:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.353 [2024-12-05 11:02:46.413108] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
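With the target listening on 10.0.0.2:4420 (TLS still flagged experimental) and bdevperf waiting on its own RPC socket, the test registers the key file and attaches over TLS; distilled from the rpc.py calls traced below:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7pK
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

The resulting namespace surfaces as TLSTESTn1, which bdevperf.py then drives with the queued verify workload for ten seconds.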
00:19:19.353 [2024-12-05 11:02:46.413483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72713 ] 00:19:19.610 [2024-12-05 11:02:46.560138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.610 [2024-12-05 11:02:46.614643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.610 [2024-12-05 11:02:46.657622] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.174 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.174 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:20.174 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.7pK 00:19:20.462 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.731 [2024-12-05 11:02:47.715267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.731 TLSTESTn1 00:19:20.731 11:02:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.989 Running I/O for 10 seconds... 00:19:22.859 5503.00 IOPS, 21.50 MiB/s [2024-12-05T11:02:50.954Z] 5543.50 IOPS, 21.65 MiB/s [2024-12-05T11:02:51.892Z] 5578.33 IOPS, 21.79 MiB/s [2024-12-05T11:02:53.268Z] 5580.25 IOPS, 21.80 MiB/s [2024-12-05T11:02:54.206Z] 5573.80 IOPS, 21.77 MiB/s [2024-12-05T11:02:55.143Z] 5587.00 IOPS, 21.82 MiB/s [2024-12-05T11:02:56.076Z] 5597.71 IOPS, 21.87 MiB/s [2024-12-05T11:02:57.009Z] 5601.50 IOPS, 21.88 MiB/s [2024-12-05T11:02:58.001Z] 5594.78 IOPS, 21.85 MiB/s [2024-12-05T11:02:58.001Z] 5568.00 IOPS, 21.75 MiB/s 00:19:30.842 Latency(us) 00:19:30.842 [2024-12-05T11:02:58.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.842 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:30.842 Verification LBA range: start 0x0 length 0x2000 00:19:30.843 TLSTESTn1 : 10.01 5574.04 21.77 0.00 0.00 22928.95 4000.59 18213.22 00:19:30.843 [2024-12-05T11:02:58.002Z] =================================================================================================================== 00:19:30.843 [2024-12-05T11:02:58.002Z] Total : 5574.04 21.77 0.00 0.00 22928.95 4000.59 18213.22 00:19:30.843 { 00:19:30.843 "results": [ 00:19:30.843 { 00:19:30.843 "job": "TLSTESTn1", 00:19:30.843 "core_mask": "0x4", 00:19:30.843 "workload": "verify", 00:19:30.843 "status": "finished", 00:19:30.843 "verify_range": { 00:19:30.843 "start": 0, 00:19:30.843 "length": 8192 00:19:30.843 }, 00:19:30.843 "queue_depth": 128, 00:19:30.843 "io_size": 4096, 00:19:30.843 "runtime": 10.011775, 00:19:30.843 "iops": 5574.0365719365445, 00:19:30.843 "mibps": 21.773580359127127, 00:19:30.843 "io_failed": 0, 00:19:30.843 "io_timeout": 0, 00:19:30.843 "avg_latency_us": 22928.945859861335, 00:19:30.843 "min_latency_us": 4000.5911646586346, 00:19:30.843 
"max_latency_us": 18213.21767068273 00:19:30.843 } 00:19:30.843 ], 00:19:30.843 "core_count": 1 00:19:30.843 } 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:30.843 11:02:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:30.843 nvmf_trace.0 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72713 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72713 ']' 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72713 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:31.122 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72713 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72713' 00:19:31.123 killing process with pid 72713 00:19:31.123 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.123 00:19:31.123 Latency(us) 00:19:31.123 [2024-12-05T11:02:58.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.123 [2024-12-05T11:02:58.282Z] =================================================================================================================== 00:19:31.123 [2024-12-05T11:02:58.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72713 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72713 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:31.123 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:19:31.382 11:02:58 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:31.382 rmmod nvme_tcp 00:19:31.382 rmmod nvme_fabrics 00:19:31.382 rmmod nvme_keyring 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 72671 ']' 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 72671 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72671 ']' 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72671 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72671 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72671' 00:19:31.382 killing process with pid 72671 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72671 00:19:31.382 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72671 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.641 11:02:58 
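nvmfcleanup and nvmf_fini then unwind the setup: kernel modules out, namespace and bridge gone, host-side veths deleted one by one, and only the SPDK-tagged iptables rules stripped. target0/target1 need no explicit delete since they vanished with the namespace (the continue branches below). Hypothetical one-liner equivalents of the traced helpers:

modprobe -v -r nvme-tcp        # retried up to 20x under set +e; the module can be busy
modprobe -v -r nvme-fabrics
ip netns delete nvmf_ns_spdk   # _remove_target_ns: takes target0/target1 with it
ip link delete nvmf_br         # delete_main_bridge
ip link delete initiator0      # host-side endpoints removed individually
ip link delete initiator1
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the tagged rules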
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.7pK 00:19:31.641 ************************************ 00:19:31.641 END TEST nvmf_fips 00:19:31.641 
************************************ 00:19:31.641 00:19:31.641 real 0m14.884s 00:19:31.641 user 0m19.711s 00:19:31.641 sys 0m6.302s 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.641 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.920 ************************************ 00:19:31.920 START TEST nvmf_control_msg_list 00:19:31.920 ************************************ 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:31.920 * Looking for test storage... 00:19:31.920 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:31.920 11:02:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:31.920 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:32.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.188 --rc genhtml_branch_coverage=1 00:19:32.188 --rc genhtml_function_coverage=1 00:19:32.188 --rc genhtml_legend=1 00:19:32.188 --rc geninfo_all_blocks=1 00:19:32.188 --rc geninfo_unexecuted_blocks=1 00:19:32.188 00:19:32.188 ' 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:32.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.188 --rc genhtml_branch_coverage=1 00:19:32.188 --rc genhtml_function_coverage=1 00:19:32.188 --rc genhtml_legend=1 00:19:32.188 --rc geninfo_all_blocks=1 00:19:32.188 --rc geninfo_unexecuted_blocks=1 00:19:32.188 00:19:32.188 ' 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:32.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.188 --rc genhtml_branch_coverage=1 00:19:32.188 --rc genhtml_function_coverage=1 00:19:32.188 --rc genhtml_legend=1 00:19:32.188 --rc geninfo_all_blocks=1 00:19:32.188 --rc geninfo_unexecuted_blocks=1 00:19:32.188 00:19:32.188 ' 00:19:32.188 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:32.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.188 --rc genhtml_branch_coverage=1 00:19:32.188 --rc genhtml_function_coverage=1 00:19:32.189 --rc genhtml_legend=1 00:19:32.189 --rc geninfo_all_blocks=1 00:19:32.189 --rc geninfo_unexecuted_blocks=1 00:19:32.189 00:19:32.189 ' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:32.189 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
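Editorial note: the trace at the top of this excerpt is scripts/common.sh walking a component-wise version comparison (decimal 1 vs. decimal 2), which autotest uses here to pick lcov option spellings compatible with the installed lcov; the lcov_branch_coverage=1 style rc names exported right after are the pre-2.0 spellings. A minimal reconstruction of that idiom follows; the function name is hypothetical.

  # Sketch of the component-wise version check traced above
  # (scripts/common.sh@353-368); the helper name is hypothetical.
  version_lt() {
      local IFS=.
      local -i v
      local -a ver1=($1) ver2=($2)
      for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
          # Non-numeric or missing components degrade to 0, as in the
          # 'decimal' helper ([[ $d =~ ^[0-9]+$ ]] or fall back to 0).
          [[ ${ver1[v]:-0} =~ ^[0-9]+$ ]] || ver1[v]=0
          [[ ${ver2[v]:-0} =~ ^[0-9]+$ ]] || ver2[v]=0
          (( ver1[v] > ver2[v] )) && return 1
          (( ver1[v] < ver2[v] )) && return 0
      done
      return 1
  }
  version_lt 1.14 2.0 && echo "pre-2.0 lcov: use the lcov_* rc option names"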
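Editorial note: the "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected" message just above is a real, if benign, shell bug caught verbatim by the trace: build_nvmf_app_args runs '[' '' -eq 1 ']', and test(1) cannot compare an empty string numerically, so the check fails with status 2 and the script simply falls through. The neighboring check at common.sh@37 ('[' 0 -eq 1 ']') compares a properly defaulted value and passes cleanly. A minimal illustration, with a hypothetical variable name:

  flag=""
  [ "$flag" -eq 1 ]        # bash: [: : integer expression expected (status 2)
  [ "${flag:-0}" -eq 1 ]   # defensive form: empty expands to 0, test is just false
  [[ ${flag:-0} -eq 1 ]]   # same idea with bash's [[ ]] arithmetic comparison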
00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@223 -- # create_target_ns 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # 
eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:32.189 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:32.190 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target0 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set 
target0_br up 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:32.190 10.0.0.1 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:32.190 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:32.191 10.0.0.2 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:32.191 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:32.191 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:32.191 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.450 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:32.450 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target1 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772163 00:19:32.451 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:32.451 10.0.0.3 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772164 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:32.451 10.0.0.4 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:32.451 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:32.452 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:32.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:19:32.452 00:19:32.452 --- 10.0.0.1 ping statistics --- 00:19:32.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.452 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:32.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:32.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:19:32.452 00:19:32.452 --- 10.0.0.2 ping statistics --- 00:19:32.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.452 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:32.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:32.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:19:32.452 00:19:32.452 --- 10.0.0.3 ping statistics --- 00:19:32.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.452 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:19:32.452 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:32.453 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:32.453 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.132 ms 00:19:32.453 00:19:32.453 --- 10.0.0.4 ping statistics --- 00:19:32.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.453 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # return 0 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:32.453 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:32.712 
11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:19:32.712 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:32.713 11:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=73099 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 73099 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73099 ']' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.713 11:02:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:32.713 [2024-12-05 11:02:59.801029] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:32.713 [2024-12-05 11:02:59.801093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.970 [2024-12-05 11:02:59.968383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.970 [2024-12-05 11:03:00.030457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.970 [2024-12-05 11:03:00.030515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.970 [2024-12-05 11:03:00.030525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.970 [2024-12-05 11:03:00.030534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.970 [2024-12-05 11:03:00.030541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
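Everything nvmf/setup.sh did above to fill in NVMF_SECOND_INITIATOR_IP, NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP reduces to one operation: read /sys/class/net/<dev>/ifalias, running the read inside the nvmf_ns_spdk namespace when the device lives there. A minimal standalone sketch of that pattern, assuming only iproute2; the helper name below is hypothetical, while the suite's real helpers (get_net_dev, get_ip_address) resolve the device first and take a nameref to NVMF_TARGET_NS_CMD rather than a namespace name:

    # hypothetical helper, not from nvmf/setup.sh: read a device's IP from
    # its ifalias, optionally inside a network namespace
    get_ifalias_ip() {
        local dev=$1 netns=${2:-} ip
        if [[ -n $netns ]]; then
            ip=$(ip netns exec "$netns" cat "/sys/class/net/$dev/ifalias")
        else
            ip=$(cat "/sys/class/net/$dev/ifalias")
        fi
        [[ -n $ip ]] && echo "$ip"
    }
    # mirrors the trace: get_ifalias_ip initiator1           -> 10.0.0.3
    #                    get_ifalias_ip target0 nvmf_ns_spdk -> 10.0.0.2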
00:19:32.970 [2024-12-05 11:03:00.030839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.970 [2024-12-05 11:03:00.072967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.903 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.903 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:33.903 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:33.903 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.903 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 [2024-12-05 11:03:00.764006] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 Malloc0 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:33.904 [2024-12-05 11:03:00.817021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73131 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73132 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73133 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73131 00:19:33.904 11:03:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:33.904 [2024-12-05 11:03:01.017314] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:33.904 [2024-12-05 11:03:01.017484] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:33.904 [2024-12-05 11:03:01.017568] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:35.278 Initializing NVMe Controllers 00:19:35.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:35.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:35.278 Initialization complete. Launching workers. 
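Before the three latency tables that follow, note that the whole control_msg_list body above condenses to a short recipe. In this sketch, rpc.py stands in for the suite's rpc_cmd wrapper and $SPDK_BIN for the build output directory (both assumptions); every flag, address and NQN mirrors the trace:

    # transport tuned for the control-message-list path: small in-capsule
    # data size and a single control message buffer
    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # three single-queue 4KiB randread perf instances on separate cores,
    # reaped in launch order (pids 73131..73133 in the run above)
    pids=()
    for mask in 0x2 0x4 0x8; do
        "$SPDK_BIN/spdk_nvme_perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
        pids+=("$!")
    done
    wait "${pids[@]}"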
00:19:35.278 ======================================================== 00:19:35.278 Latency(us) 00:19:35.278 Device Information : IOPS MiB/s Average min max 00:19:35.278 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4535.00 17.71 220.28 147.75 750.26 00:19:35.278 ======================================================== 00:19:35.278 Total : 4535.00 17.71 220.28 147.75 750.26 00:19:35.278 00:19:35.278 Initializing NVMe Controllers 00:19:35.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:35.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:35.278 Initialization complete. Launching workers. 00:19:35.278 ======================================================== 00:19:35.278 Latency(us) 00:19:35.278 Device Information : IOPS MiB/s Average min max 00:19:35.278 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4540.00 17.73 220.06 148.54 747.85 00:19:35.278 ======================================================== 00:19:35.278 Total : 4540.00 17.73 220.06 148.54 747.85 00:19:35.278 00:19:35.278 Initializing NVMe Controllers 00:19:35.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:35.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:35.278 Initialization complete. Launching workers. 00:19:35.278 ======================================================== 00:19:35.278 Latency(us) 00:19:35.278 Device Information : IOPS MiB/s Average min max 00:19:35.278 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4518.00 17.65 221.08 150.16 778.94 00:19:35.278 ======================================================== 00:19:35.278 Total : 4518.00 17.65 221.08 150.16 778.94 00:19:35.278 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73132 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73133 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:35.278 rmmod nvme_tcp 00:19:35.278 rmmod nvme_fabrics 00:19:35.278 rmmod nvme_keyring 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 
73099 ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73099 ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73099' 00:19:35.278 killing process with pid 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 73099 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:35.278 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:35.540 
11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:35.540 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-save 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:35.541 00:19:35.541 real 0m3.797s 00:19:35.541 user 0m5.503s 00:19:35.541 sys 0m1.834s 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:35.541 ************************************ 00:19:35.541 END TEST nvmf_control_msg_list 00:19:35.541 
************************************ 00:19:35.541 11:03:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.800 ************************************ 00:19:35.800 START TEST nvmf_wait_for_buf 00:19:35.800 ************************************ 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:35.800 * Looking for test storage... 00:19:35.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.800 --rc genhtml_branch_coverage=1 00:19:35.800 --rc genhtml_function_coverage=1 00:19:35.800 --rc genhtml_legend=1 00:19:35.800 --rc geninfo_all_blocks=1 00:19:35.800 --rc geninfo_unexecuted_blocks=1 00:19:35.800 00:19:35.800 ' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.800 --rc genhtml_branch_coverage=1 00:19:35.800 --rc genhtml_function_coverage=1 00:19:35.800 --rc genhtml_legend=1 00:19:35.800 --rc geninfo_all_blocks=1 00:19:35.800 --rc geninfo_unexecuted_blocks=1 00:19:35.800 00:19:35.800 ' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.800 --rc genhtml_branch_coverage=1 00:19:35.800 --rc genhtml_function_coverage=1 00:19:35.800 --rc genhtml_legend=1 00:19:35.800 --rc geninfo_all_blocks=1 00:19:35.800 --rc geninfo_unexecuted_blocks=1 00:19:35.800 00:19:35.800 ' 00:19:35.800 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:35.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.800 --rc genhtml_branch_coverage=1 00:19:35.800 --rc genhtml_function_coverage=1 00:19:35.800 --rc genhtml_legend=1 00:19:35.800 --rc geninfo_all_blocks=1 00:19:35.800 --rc geninfo_unexecuted_blocks=1 00:19:35.800 00:19:35.801 ' 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:35.801 11:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.801 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.061 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 -- # : 0 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:36.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:36.062 11:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@223 -- # create_target_ns 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:19:36.062 11:03:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:36.062 11:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target0 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local 
dev=target0 ns=nvmf_ns_spdk 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:36.062 10.0.0.1 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:36.062 10.0.0.2 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:36.062 
11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:36.062 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:36.063 11:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:36.063 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:36.320 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:36.321 11:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772163 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:36.321 10.0.0.3 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772164 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:36.321 10.0.0.4 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator1_br 
bridge=nvmf_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:36.321 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:36.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
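The ping_ip call traced above runs ping either on the host or inside the target namespace, depending on whether a nameref to NVMF_TARGET_NS_CMD was passed along. A minimal sketch of that pattern, reconstructed from the setup.sh@80-83 trace rather than copied from the source:

NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

ping_ip() {
    local ip=$1 in_ns=$2 count=1
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns    # nameref to the command-prefix array
        eval "${ns[*]} ping -c $count $ip"
    else
        eval " ping -c $count $ip"
    fi
}

ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD    # from inside the netns, as above
ping_ip 10.0.0.2                       # host-side, as for target0 below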
00:19:36.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:19:36.322 00:19:36.322 --- 10.0.0.1 ping statistics --- 00:19:36.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.322 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:36.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
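The get_ip_address/get_target_ip_address calls above resolve a device's address by reading /sys/class/net/<dev>/ifalias, which set_ip populated earlier via tee, so no `ip addr` output ever needs parsing. A sketch of that round trip; set_dev_ip/get_dev_ip are hypothetical names standing in for the traced set_ip and get_ip_address helpers:

set_dev_ip() {    # stands in for the traced set_ip helper
    local dev=$1 ip=$2
    ip addr add "$ip/24" dev "$dev"
    echo "$ip" | tee "/sys/class/net/$dev/ifalias"
}

get_dev_ip() {    # stands in for the traced get_ip_address helper
    local dev=$1
    cat "/sys/class/net/$dev/ifalias"
}

set_dev_ip initiator0 10.0.0.1
get_dev_ip initiator0    # prints 10.0.0.1, read straight back from ifalias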
00:19:36.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:19:36.322 00:19:36.322 --- 10.0.0.2 ping statistics --- 00:19:36.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.322 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:36.322 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:36.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
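Every address pinged in this block was derived earlier by val_to_ip from a 32-bit integer (167772163 -> 10.0.0.3, 167772164 -> 10.0.0.4), which is what lets setup.sh hand out sequential addresses just by bumping ip_pool by 2 per device pair. The printf format is verbatim from the trace; the octet extraction is a reconstruction:

val_to_ip() {
    local val=$1
    # split the integer into four octets, most significant first
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) \
        $(( val & 0xff ))
}

val_to_ip 167772163    # 10.0.0.3 (0x0A000003)
val_to_ip 167772164    # 10.0.0.4 (0x0A000004)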
00:19:36.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:19:36.581 00:19:36.581 --- 10.0.0.3 ping statistics --- 00:19:36.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.581 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:36.581 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
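The ipts call traced earlier (setup.sh@73 expanding through common.sh@547) opened TCP port 4420 for each initiator and tagged the rule with an 'SPDK_NVMF:' comment; the teardown at the end of the test strips every tagged rule in one pass. Both wrappers as a sketch, matching the expansions visible in this log:

ipts() {    # insert a rule, tagged with its own arguments
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

iptr() {    # drop every tagged rule by filtering an iptables-save dump
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
iptr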
00:19:36.581 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:19:36.581 00:19:36.581 --- 10.0.0.4 ping statistics --- 00:19:36.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.581 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # return 0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.581 
11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:36.581 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.582 11:03:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=73370 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:36.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 73370 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73370 ']' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:36.582 11:03:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.582 [2024-12-05 11:03:03.705781] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:36.582 [2024-12-05 11:03:03.705857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.841 [2024-12-05 11:03:03.862116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.841 [2024-12-05 11:03:03.920624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.841 [2024-12-05 11:03:03.920889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.841 [2024-12-05 11:03:03.920909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.841 [2024-12-05 11:03:03.920920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.841 [2024-12-05 11:03:03.920930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
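nvmfappstart above boils down to launching nvmf_tgt inside the test namespace and polling its RPC socket until the app answers. A minimal sketch of that sequence; the rpc_get_methods readiness probe is an assumption standing in for the fuller waitforlisten helper:

SPDK=/home/vagrant/spdk_repo/spdk

# launch the target in the netns with the same flags as the trace
ip netns exec nvmf_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# poll /var/tmp/spdk.sock until the app starts answering RPCs
for ((i = 0; i < 100; i++)); do    # max_retries=100, as in the trace
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1   # bail out if the launch already died
    sleep 0.1
done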
00:19:36.841 [2024-12-05 11:03:03.921285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 [2024-12-05 11:03:04.815527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 Malloc0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 [2024-12-05 11:03:04.875750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.775 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.776 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.776 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.776 [2024-12-05 11:03:04.903809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.776 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.776 11:03:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:38.033 [2024-12-05 11:03:05.115409] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:39.408 Initializing NVMe Controllers 00:19:39.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:39.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:39.408 Initialization complete. Launching workers. 
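The rpc_cmd calls traced above take the target from --wait-for-rpc to a servable subsystem, and spdk_nvme_perf then drives reads at it; the perf results follow below. The same sequence as direct rpc.py invocations, a sketch assuming the default /var/tmp/spdk.sock socket (the deliberately tiny 154-buffer small pool is what forces the buffer-wait path this test exists to exercise):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

$rpc accel_set_options --small-cache-size 0 --large-cache-size 0
$rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
$rpc framework_start_init
$rpc bdev_malloc_create -b Malloc0 32 512
$rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'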
00:19:39.408 ======================================================== 00:19:39.408 Latency(us) 00:19:39.408 Device Information : IOPS MiB/s Average min max 00:19:39.408 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 498.18 62.27 8029.26 5034.34 10957.85 00:19:39.408 ======================================================== 00:19:39.408 Total : 498.18 62.27 8029.26 5034.34 10957.85 00:19:39.408 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:39.408 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:39.408 rmmod nvme_tcp 00:19:39.408 rmmod nvme_fabrics 00:19:39.408 rmmod nvme_keyring 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 73370 ']' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 73370 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73370 ']' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 73370 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73370 00:19:39.763 killing process with pid 73370 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73370' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73370 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73370 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:39.763 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:40.056 11:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:19:40.056 ************************************ 00:19:40.056 END TEST nvmf_wait_for_buf 00:19:40.056 ************************************ 00:19:40.056 00:19:40.056 real 0m4.273s 00:19:40.056 user 0m3.646s 00:19:40.056 sys 0m1.175s 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.056 11:03:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:40.056 ************************************ 00:19:40.056 START TEST nvmf_nsid 00:19:40.056 ************************************ 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:19:40.056 * Looking for test storage... 
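The pass condition for the nvmf_wait_for_buf run that just finished was the retry_count check traced earlier: iobuf_get_stats must show that the starved small pool forced the nvmf_TCP module to retry buffer allocation at least once (4750 retries in this run). A sketch of that assertion:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

retry_count=$($rpc iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')

if [[ $retry_count -eq 0 ]]; then
    echo "small iobuf pool was never exhausted; nothing waited for buffers" >&2
    exit 1
fi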
00:19:40.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:19:40.056 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:19:40.316 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.317 --rc genhtml_branch_coverage=1 00:19:40.317 --rc genhtml_function_coverage=1 00:19:40.317 --rc genhtml_legend=1 00:19:40.317 --rc geninfo_all_blocks=1 00:19:40.317 --rc geninfo_unexecuted_blocks=1 00:19:40.317 00:19:40.317 ' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.317 --rc genhtml_branch_coverage=1 00:19:40.317 --rc genhtml_function_coverage=1 00:19:40.317 --rc genhtml_legend=1 00:19:40.317 --rc geninfo_all_blocks=1 00:19:40.317 --rc geninfo_unexecuted_blocks=1 00:19:40.317 00:19:40.317 ' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.317 --rc genhtml_branch_coverage=1 00:19:40.317 --rc genhtml_function_coverage=1 00:19:40.317 --rc genhtml_legend=1 00:19:40.317 --rc geninfo_all_blocks=1 00:19:40.317 --rc geninfo_unexecuted_blocks=1 00:19:40.317 00:19:40.317 ' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:40.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.317 --rc genhtml_branch_coverage=1 00:19:40.317 --rc genhtml_function_coverage=1 00:19:40.317 --rc genhtml_legend=1 00:19:40.317 --rc geninfo_all_blocks=1 00:19:40.317 --rc geninfo_unexecuted_blocks=1 00:19:40.317 00:19:40.317 ' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
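The lt/cmp_versions dance above decides whether the installed lcov (1.15) predates 2.x and therefore needs the legacy --rc lcov_* options picked a few records later. A compact sketch of the comparison, assuming purely numeric components (the traced scripts/common.sh version also sanitizes each field through its decimal helper):

lt() {    # return 0 iff version $1 sorts strictly before version $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # missing fields count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

lt 1.15 2 && echo "lcov predates 2.x; use the legacy --rc options"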
00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:40.317 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:40.317 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:40.318 11:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@223 -- # create_target_ns 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@121 -- # return 0 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.318 
11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target0 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:40.318 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:40.319 11:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:40.319 10.0.0.1 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:40.319 10.0.0.2 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:40.319 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # 
ips=("$ip" $((++ip))) 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target1 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ 
tcp == tcp ]] 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:40.580 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772163 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:40.581 10.0.0.3 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772164 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:40.581 10.0.0.4 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:40.581 11:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 
-p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:40.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
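The ping exchange beginning here verifies the virtual topology assembled above. Condensed to bare iproute2/iptables calls, one initiator/target pair of that topology amounts to roughly the following (a sketch of what create_target_ns, create_veth, add_to_ns, set_ip and add_to_bridge just traced, not their literal bodies):

ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link add initiator0 type veth peer name initiator0_br   # host-side endpoint
ip link add target0 type veth peer name target0_br         # endpoint destined for the namespace
ip link set target0 netns nvmf_ns_spdk
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br                   # bridge the two _br peers together
ip link set target0_br master nvmf_br
for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up
iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT

The second pair (initiator1/target1 with 10.0.0.3 and 10.0.0.4) repeats the same steps with the id shifted.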
00:19:40.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:19:40.581 00:19:40.581 --- 10.0.0.1 ping statistics --- 00:19:40.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.581 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:40.581 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:40.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
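Every 10.0.0.x address in this exchange comes from one integer pool: val_to_ip, seen above with 167772161 (0x0A000001), simply unpacks a 32-bit value into dotted-quad form, and the pool advances by 2 per interface pair. A self-contained sketch of that conversion:

val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff)) $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
val_to_ip 167772164   # 10.0.0.4 (target1)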
00:19:40.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:19:40.582 00:19:40.582 --- 10.0.0.2 ping statistics --- 00:19:40.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.582 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:40.582 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:40.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
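Note where the addresses are read back from: get_ip_address goes through /sys/class/net/<dev>/ifalias rather than parsing "ip addr" output. set_ip stored the address there with tee, and tee echoing its input is why bare 10.0.0.x lines appear in this log. The pattern in isolation:

echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias   # set_ip side: store (tee also prints the value)
ip=$(cat /sys/class/net/initiator1/ifalias)             # get_ip_address side: read back
echo "$ip"                                              # -> 10.0.0.3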
00:19:40.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:40.842 00:19:40.842 --- 10.0.0.3 ping statistics --- 00:19:40.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.842 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:40.842 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:40.843 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
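Taken together, ping_ips deliberately crosses the namespace boundary in both directions, which is the property actually being verified here:

ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # namespace -> host  (initiator0)
ping -c 1 10.0.0.2                              # host -> namespace  (target0)
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3   # namespace -> host  (initiator1)
ping -c 1 10.0.0.4                              # host -> namespace  (target1)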
00:19:40.843 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.134 ms 00:19:40.843 00:19:40.843 --- 10.0.0.4 ping statistics --- 00:19:40.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.843 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # return 0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:40.843 11:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:40.843 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=73645 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 73645 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73645 ']' 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
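nvmfappstart has just launched nvmf_tgt inside the namespace (ip netns exec nvmf_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 1, pid 73645), and waitforlisten blocks until the app is up. A minimal sketch of that kind of readiness loop, assuming a plain poll on the pid and the RPC UNIX socket rather than the real helper's exact logic:

pid=73645
sock=/var/tmp/spdk.sock
for ((i = 0; i < 200; i++)); do                  # roughly a 20 s budget
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
    [[ -S $sock ]] && break   # socket node exists; the real helper goes further and waits for an RPC reply
    sleep 0.1
done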
00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:19:40.844 11:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:19:40.844 [2024-12-05 11:03:07.981214] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:19:40.844 [2024-12-05 11:03:07.981388] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:41.103 [2024-12-05 11:03:08.134626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:41.103 [2024-12-05 11:03:08.182267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:41.103 [2024-12-05 11:03:08.182329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:41.103 [2024-12-05 11:03:08.182338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:41.103 [2024-12-05 11:03:08.182347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:41.103 [2024-12-05 11:03:08.182355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:41.103 [2024-12-05 11:03:08.182630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:41.103 [2024-12-05 11:03:08.224343] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73677
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=8562b82c-0830-492d-acd2-43c15e12be01
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=b0e4f2cd-a184-4b85-9a2f-095832b99bd4
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=499acaad-06be-4dc3-84f9-e3e29eb5a2cd
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.041 11:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:19:42.041 null0
00:19:42.041 [2024-12-05 11:03:08.960326] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
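The nsid test needs two SPDK targets alive at once, and the trace above shows the mechanism: each process gets a disjoint core mask (-m 1 vs -m 2) and its own RPC socket (-r), and every later rpc.py call selects an instance with -s. A rough sketch using the paths shown in the trace; the until-loop is a crude stand-in for the harness's waitforlisten helper, and rpc_get_methods is just a harmless probe of the second instance (not something this log itself runs):

  ip netns exec nvmf_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &   # target 1, core 0, default /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &                       # target 2, core 1, private RPC socket
  tgt2pid=$!
  until [ -S /var/tmp/tgt2.sock ]; do sleep 0.1; done                   # wait for the RPC socket to appear
  scripts/rpc.py -s /var/tmp/tgt2.sock rpc_get_methods > /dev/null      # talk to the second instance only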
00:19:42.041 [2024-12-05 11:03:08.960392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73677 ] 00:19:42.041 null1 00:19:42.041 null2 00:19:42.041 [2024-12-05 11:03:08.977941] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.041 [2024-12-05 11:03:09.002025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73677 /var/tmp/tgt2.sock 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73677 ']' 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.041 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:42.041 [2024-12-05 11:03:09.110439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.041 [2024-12-05 11:03:09.161854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.300 [2024-12-05 11:03:09.217761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.300 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.300 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:42.300 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:42.868 [2024-12-05 11:03:09.722970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.868 [2024-12-05 11:03:09.739048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:42.868 nvme0n1 nvme0n2 00:19:42.868 nvme1n1 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == 
\n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:19:42.868 11:03:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:19:43.805 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:43.805 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 8562b82c-0830-492d-acd2-43c15e12be01 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:44.064 11:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8562b82c0830492dacd243c15e12be01 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8562B82C0830492DACD243C15E12BE01 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8562B82C0830492DACD243C15E12BE01 == \8\5\6\2\B\8\2\C\0\8\3\0\4\9\2\D\A\C\D\2\4\3\C\1\5\E\1\2\B\E\0\1 ]] 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 
-- # return 0 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid b0e4f2cd-a184-4b85-9a2f-095832b99bd4 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b0e4f2cda1844b859a2f095832b99bd4 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B0E4F2CDA1844B859A2F095832B99BD4 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ B0E4F2CDA1844B859A2F095832B99BD4 == \B\0\E\4\F\2\C\D\A\1\8\4\4\B\8\5\9\A\2\F\0\9\5\8\3\2\B\9\9\B\D\4 ]] 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:19:44.064 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 499acaad-06be-4dc3-84f9-e3e29eb5a2cd 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:19:44.065 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=499acaad06be4dc384f9e3e29eb5a2cd 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 499ACAAD06BE4DC384F9E3E29EB5A2CD 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 499ACAAD06BE4DC384F9E3E29EB5A2CD == \4\9\9\A\C\A\A\D\0\6\B\E\4\D\C\3\8\4\F\9\E\3\E\2\9\E\B\5\A\2\C\D ]] 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73677 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73677 ']' 
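All three checks above enforce the same invariant: a namespace created with a given UUID must report an NGUID equal to that UUID with its dashes stripped, ignoring case. Folded into a single helper (hypothetical name; the tr, nvme-cli, and jq invocations are exactly the ones visible in the trace):

  check_nguid() {
      local dev=$1 uuid=$2 want got
      want=$(tr -d - <<< "$uuid")                       # 8562b82c-... -> 8562b82c...
      got=$(nvme id-ns "$dev" -o json | jq -r .nguid)   # NGUID reported by the controller
      [ "${got^^}" = "${want^^}" ]                      # compare case-insensitively
  }

  check_nguid /dev/nvme0n2 b0e4f2cd-a184-4b85-9a2f-095832b99bd4 && echo 'nsid 2: NGUID matches'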
00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73677 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73677 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73677' 00:19:44.326 killing process with pid 73677 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73677 00:19:44.326 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73677 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:44.906 rmmod nvme_tcp 00:19:44.906 rmmod nvme_fabrics 00:19:44.906 rmmod nvme_keyring 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 73645 ']' 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 73645 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73645 ']' 00:19:44.906 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73645 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73645 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:44.907 killing process with pid 73645 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73645' 00:19:44.907 11:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73645 00:19:44.907 11:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73645 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.165 11:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:19:45.165 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:19:45.424 00:19:45.424 real 0m5.268s 00:19:45.424 user 0m6.990s 00:19:45.424 sys 0m2.276s 00:19:45.424 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.424 11:03:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:45.424 ************************************ 00:19:45.424 END TEST nvmf_nsid 00:19:45.424 ************************************ 00:19:45.424 11:03:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:45.424 ************************************ 00:19:45.424 END TEST nvmf_target_extra 00:19:45.424 ************************************ 00:19:45.424 00:19:45.424 real 4m53.916s 00:19:45.424 user 9m43.716s 00:19:45.424 sys 1m22.857s 00:19:45.424 11:03:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.424 11:03:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.424 11:03:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:45.424 11:03:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.424 11:03:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.424 11:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:45.424 ************************************ 00:19:45.424 START TEST nvmf_host 00:19:45.424 ************************************ 00:19:45.424 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:45.424 * Looking for test storage... 
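From here the log repeats the per-suite preamble and then traces nvmftestinit for the identify test, rebuilding the virtual topology that the cleanup above just tore down. Condensed to its effect for one initiator/target pair (with the ifalias bookkeeping and the iptables ACCEPT rules left out), the setup visible further below amounts to roughly:

  ip netns add nvmf_ns_spdk                                  # target side gets its own namespace
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk                     # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev initiator0 && ip link set initiator0 up
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
  ip link set target0_br master nvmf_br && ip link set target0_br up

The *_br peers exist only so both legs can be enslaved to nvmf_br; that bridge is what lets initiator0 in the root namespace reach target0 inside nvmf_ns_spdk.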
00:19:45.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:45.424 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.424 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.424 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.683 --rc genhtml_branch_coverage=1 00:19:45.683 --rc genhtml_function_coverage=1 00:19:45.683 --rc genhtml_legend=1 00:19:45.683 --rc geninfo_all_blocks=1 00:19:45.683 --rc geninfo_unexecuted_blocks=1 00:19:45.683 00:19:45.683 ' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.683 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:45.683 --rc genhtml_branch_coverage=1 00:19:45.683 --rc genhtml_function_coverage=1 00:19:45.683 --rc genhtml_legend=1 00:19:45.683 --rc geninfo_all_blocks=1 00:19:45.683 --rc geninfo_unexecuted_blocks=1 00:19:45.683 00:19:45.683 ' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.683 --rc genhtml_branch_coverage=1 00:19:45.683 --rc genhtml_function_coverage=1 00:19:45.683 --rc genhtml_legend=1 00:19:45.683 --rc geninfo_all_blocks=1 00:19:45.683 --rc geninfo_unexecuted_blocks=1 00:19:45.683 00:19:45.683 ' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.683 --rc genhtml_branch_coverage=1 00:19:45.683 --rc genhtml_function_coverage=1 00:19:45.683 --rc genhtml_legend=1 00:19:45.683 --rc geninfo_all_blocks=1 00:19:45.683 --rc geninfo_unexecuted_blocks=1 00:19:45.683 00:19:45.683 ' 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.683 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:45.684 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.684 ************************************ 00:19:45.684 START TEST nvmf_identify 00:19:45.684 ************************************ 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:45.684 * Looking for test storage... 00:19:45.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:45.684 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.944 --rc genhtml_branch_coverage=1 00:19:45.944 --rc genhtml_function_coverage=1 00:19:45.944 --rc genhtml_legend=1 00:19:45.944 --rc geninfo_all_blocks=1 00:19:45.944 --rc geninfo_unexecuted_blocks=1 00:19:45.944 00:19:45.944 ' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.944 --rc genhtml_branch_coverage=1 00:19:45.944 --rc genhtml_function_coverage=1 00:19:45.944 --rc genhtml_legend=1 00:19:45.944 --rc geninfo_all_blocks=1 00:19:45.944 --rc geninfo_unexecuted_blocks=1 00:19:45.944 00:19:45.944 ' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.944 --rc genhtml_branch_coverage=1 00:19:45.944 --rc genhtml_function_coverage=1 00:19:45.944 --rc genhtml_legend=1 00:19:45.944 --rc geninfo_all_blocks=1 00:19:45.944 --rc geninfo_unexecuted_blocks=1 00:19:45.944 00:19:45.944 ' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.944 --rc genhtml_branch_coverage=1 00:19:45.944 --rc genhtml_function_coverage=1 00:19:45.944 --rc genhtml_legend=1 00:19:45.944 --rc geninfo_all_blocks=1 00:19:45.944 --rc geninfo_unexecuted_blocks=1 00:19:45.944 00:19:45.944 ' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.944 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:45.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@223 -- # create_target_ns 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:45.945 11:03:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:45.945 11:03:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:45.945 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target0 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:45.946 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:46.206 10.0.0.1 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target0 
ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:46.206 10.0.0.2 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:46.206 11:03:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:46.206 11:03:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:46.206 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target1 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772163 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:46.207 10.0.0.3 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772164 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:46.207 10.0.0.4 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:46.207 11:03:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.207 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:46.467 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.468 
11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:46.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:19:46.468 00:19:46.468 --- 10.0.0.1 ping statistics --- 00:19:46.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.468 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:46.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:19:46.468 00:19:46.468 --- 10.0.0.2 ping statistics --- 00:19:46.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.468 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:46.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:46.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:19:46.468 00:19:46.468 --- 10.0.0.3 ping statistics --- 00:19:46.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.468 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:46.468 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:46.468 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:19:46.468 00:19:46.468 --- 10.0.0.4 ping statistics --- 00:19:46.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.468 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:46.468 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # return 0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:46.469 11:03:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:46.469 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74033 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74033 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74033 ']' 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
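For reference, every address the trace derives above, from NVMF_FIRST_INITIATOR_IP=10.0.0.1 through NVMF_SECOND_TARGET_IP=10.0.0.4, comes out of a single integer pool starting at 0x0a000001 (167772161); each initiator/target pair consumes two consecutive values, and val_to_ip renders them as dotted quads. A minimal standalone reconstruction of that conversion (the helper name matches the traced nvmf/setup.sh, but the bit-shifting is an illustrative assumption; the trace only shows the final printf):

  # val_to_ip: render a 32-bit integer as an IPv4 dotted quad (sketch)
  val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(((val >> 24) & 0xff)) $(((val >> 16) & 0xff)) \
          $(((val >> 8) & 0xff)) $((val & 0xff))
  }
  val_to_ip 167772161   # -> 10.0.0.1 (initiator0, host side)
  val_to_ip 167772162   # -> 10.0.0.2 (target0, inside nvmf_ns_spdk)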
00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.729 11:03:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:46.729 [2024-12-05 11:03:13.699069] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:46.729 [2024-12-05 11:03:13.699132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.729 [2024-12-05 11:03:13.853430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.988 [2024-12-05 11:03:13.907382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.988 [2024-12-05 11:03:13.907435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.988 [2024-12-05 11:03:13.907446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.988 [2024-12-05 11:03:13.907455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.988 [2024-12-05 11:03:13.907462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.988 [2024-12-05 11:03:13.908487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.988 [2024-12-05 11:03:13.908602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.988 [2024-12-05 11:03:13.908685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.988 [2024-12-05 11:03:13.908689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.988 [2024-12-05 11:03:13.950728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.555 [2024-12-05 11:03:14.598448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
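For reference, rpc_cmd in the trace wraps scripts/rpc.py talking to the freshly started target over /var/tmp/spdk.sock (the socket named in the waitforlisten message above; UNIX sockets remain reachable across the network namespace). The provisioning sequence traced here and in the lines that follow could be replayed roughly like this (a sketch; paths and socket are as captured in this run, flag meanings hedged in the comments):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems                        # dump the result, as shown below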
00:19:47.555 Malloc0 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.555 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 [2024-12-05 11:03:14.725274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.816 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:47.816 [ 00:19:47.816 { 00:19:47.816 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:47.816 "subtype": "Discovery", 00:19:47.816 "listen_addresses": [ 00:19:47.816 { 00:19:47.816 "trtype": "TCP", 00:19:47.816 "adrfam": "IPv4", 00:19:47.816 "traddr": "10.0.0.2", 00:19:47.816 "trsvcid": "4420" 00:19:47.817 } 00:19:47.817 ], 00:19:47.817 "allow_any_host": true, 00:19:47.817 "hosts": [] 00:19:47.817 }, 00:19:47.817 { 00:19:47.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.817 "subtype": "NVMe", 00:19:47.817 "listen_addresses": [ 00:19:47.817 { 00:19:47.817 "trtype": "TCP", 00:19:47.817 "adrfam": "IPv4", 00:19:47.817 "traddr": "10.0.0.2", 00:19:47.817 "trsvcid": "4420" 00:19:47.817 } 00:19:47.817 ], 00:19:47.817 "allow_any_host": true, 00:19:47.817 "hosts": [], 00:19:47.817 "serial_number": "SPDK00000000000001", 00:19:47.817 "model_number": "SPDK bdev Controller", 00:19:47.817 "max_namespaces": 32, 00:19:47.817 "min_cntlid": 1, 00:19:47.817 "max_cntlid": 65519, 00:19:47.817 "namespaces": [ 
00:19:47.817 { 00:19:47.817 "nsid": 1, 00:19:47.817 "bdev_name": "Malloc0", 00:19:47.817 "name": "Malloc0", 00:19:47.817 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:47.817 "eui64": "ABCDEF0123456789", 00:19:47.817 "uuid": "f0fd9ee7-bfef-46bd-b205-6878e62c0e04" 00:19:47.817 } 00:19:47.817 ] 00:19:47.817 } 00:19:47.817 ] 00:19:47.817 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.817 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:47.817 [2024-12-05 11:03:14.775831] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:47.817 [2024-12-05 11:03:14.775891] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:19:47.817 [2024-12-05 11:03:14.925332] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:19:47.817 [2024-12-05 11:03:14.925398] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:47.817 [2024-12-05 11:03:14.925404] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:47.817 [2024-12-05 11:03:14.925421] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:47.817 [2024-12-05 11:03:14.925434] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:47.817 [2024-12-05 11:03:14.925742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:19:47.817 [2024-12-05 11:03:14.925781] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf0f750 0 00:19:47.817 [2024-12-05 11:03:14.940295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:47.817 [2024-12-05 11:03:14.940316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:47.817 [2024-12-05 11:03:14.940322] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:47.817 [2024-12-05 11:03:14.940326] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:47.817 [2024-12-05 11:03:14.940360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.940366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.940370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.817 [2024-12-05 11:03:14.940383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:47.817 [2024-12-05 11:03:14.940409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.817 [2024-12-05 11:03:14.951319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.817 [2024-12-05 11:03:14.951349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.817 [2024-12-05 11:03:14.951354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.817 [2024-12-05 
11:03:14.951359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.817 [2024-12-05 11:03:14.951368] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:47.817 [2024-12-05 11:03:14.951375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:19:47.817 [2024-12-05 11:03:14.951382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:19:47.817 [2024-12-05 11:03:14.951399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.817 [2024-12-05 11:03:14.951416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.817 [2024-12-05 11:03:14.951439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.817 [2024-12-05 11:03:14.951490] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.817 [2024-12-05 11:03:14.951496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.817 [2024-12-05 11:03:14.951500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.817 [2024-12-05 11:03:14.951510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:19:47.817 [2024-12-05 11:03:14.951517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:19:47.817 [2024-12-05 11:03:14.951523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.817 [2024-12-05 11:03:14.951537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.817 [2024-12-05 11:03:14.951551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.817 [2024-12-05 11:03:14.951591] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.817 [2024-12-05 11:03:14.951596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.817 [2024-12-05 11:03:14.951600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.817 [2024-12-05 11:03:14.951609] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:19:47.817 [2024-12-05 11:03:14.951617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:47.817 [2024-12-05 
11:03:14.951623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.817 [2024-12-05 11:03:14.951637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.817 [2024-12-05 11:03:14.951649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.817 [2024-12-05 11:03:14.951681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.817 [2024-12-05 11:03:14.951686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.817 [2024-12-05 11:03:14.951690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.817 [2024-12-05 11:03:14.951699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:47.817 [2024-12-05 11:03:14.951708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951715] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.817 [2024-12-05 11:03:14.951721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.817 [2024-12-05 11:03:14.951734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.817 [2024-12-05 11:03:14.951773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.817 [2024-12-05 11:03:14.951779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.817 [2024-12-05 11:03:14.951783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.817 [2024-12-05 11:03:14.951787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.817 [2024-12-05 11:03:14.951791] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:47.817 [2024-12-05 11:03:14.951796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:47.817 [2024-12-05 11:03:14.951804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:47.817 [2024-12-05 11:03:14.951913] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:19:47.817 [2024-12-05 11:03:14.951919] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:47.817 [2024-12-05 11:03:14.951926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.951930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:19:47.818 [2024-12-05 11:03:14.951934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.951940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.818 [2024-12-05 11:03:14.951953] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.818 [2024-12-05 11:03:14.951990] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.818 [2024-12-05 11:03:14.951995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.818 [2024-12-05 11:03:14.951999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.818 [2024-12-05 11:03:14.952007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:47.818 [2024-12-05 11:03:14.952016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.818 [2024-12-05 11:03:14.952042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.818 [2024-12-05 11:03:14.952073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.818 [2024-12-05 11:03:14.952079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.818 [2024-12-05 11:03:14.952082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.818 [2024-12-05 11:03:14.952091] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:47.818 [2024-12-05 11:03:14.952096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:19:47.818 [2024-12-05 11:03:14.952112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.818 [2024-12-05 11:03:14.952143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf73740, cid 0, qid 0 00:19:47.818 [2024-12-05 11:03:14.952222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.818 [2024-12-05 11:03:14.952228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.818 [2024-12-05 11:03:14.952232] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952236] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0f750): datao=0, datal=4096, cccid=0 00:19:47.818 [2024-12-05 11:03:14.952241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf73740) on tqpair(0xf0f750): expected_datao=0, payload_size=4096 00:19:47.818 [2024-12-05 11:03:14.952246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952258] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.818 [2024-12-05 11:03:14.952282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.818 [2024-12-05 11:03:14.952286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.818 [2024-12-05 11:03:14.952298] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:19:47.818 [2024-12-05 11:03:14.952304] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:19:47.818 [2024-12-05 11:03:14.952308] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:19:47.818 [2024-12-05 11:03:14.952317] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:19:47.818 [2024-12-05 11:03:14.952322] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:19:47.818 [2024-12-05 11:03:14.952327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952356] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:47.818 [2024-12-05 11:03:14.952370] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.818 [2024-12-05 11:03:14.952419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.818 [2024-12-05 11:03:14.952425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.818 [2024-12-05 
11:03:14.952428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750 00:19:47.818 [2024-12-05 11:03:14.952439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.818 [2024-12-05 11:03:14.952458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.818 [2024-12-05 11:03:14.952477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.818 [2024-12-05 11:03:14.952497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.818 [2024-12-05 11:03:14.952515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:47.818 [2024-12-05 11:03:14.952529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.818 [2024-12-05 11:03:14.952540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0f750) 00:19:47.818 [2024-12-05 11:03:14.952546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.818 [2024-12-05 11:03:14.952567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73740, cid 0, qid 0 00:19:47.818 [2024-12-05 11:03:14.952572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf738c0, cid 1, qid 0 00:19:47.818 [2024-12-05 11:03:14.952577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73a40, cid 2, qid 0 00:19:47.818 
[2024-12-05 11:03:14.952582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.818 [2024-12-05 11:03:14.952586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73d40, cid 4, qid 0 00:19:47.818 [2024-12-05 11:03:14.952650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.819 [2024-12-05 11:03:14.952655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.819 [2024-12-05 11:03:14.952659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73d40) on tqpair=0xf0f750 00:19:47.819 [2024-12-05 11:03:14.952668] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:19:47.819 [2024-12-05 11:03:14.952673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:19:47.819 [2024-12-05 11:03:14.952682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0f750) 00:19:47.819 [2024-12-05 11:03:14.952692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.819 [2024-12-05 11:03:14.952705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73d40, cid 4, qid 0 00:19:47.819 [2024-12-05 11:03:14.952750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.819 [2024-12-05 11:03:14.952756] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.819 [2024-12-05 11:03:14.952760] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952764] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0f750): datao=0, datal=4096, cccid=4 00:19:47.819 [2024-12-05 11:03:14.952768] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf73d40) on tqpair(0xf0f750): expected_datao=0, payload_size=4096 00:19:47.819 [2024-12-05 11:03:14.952773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952779] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952783] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.819 [2024-12-05 11:03:14.952796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.819 [2024-12-05 11:03:14.952799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73d40) on tqpair=0xf0f750 00:19:47.819 [2024-12-05 11:03:14.952816] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:19:47.819 [2024-12-05 11:03:14.952841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0f750) 00:19:47.819 [2024-12-05 11:03:14.952851] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.819 [2024-12-05 11:03:14.952858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf0f750) 00:19:47.819 [2024-12-05 11:03:14.952871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:47.819 [2024-12-05 11:03:14.952888] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73d40, cid 4, qid 0 00:19:47.819 [2024-12-05 11:03:14.952893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73ec0, cid 5, qid 0 00:19:47.819 [2024-12-05 11:03:14.952982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.819 [2024-12-05 11:03:14.952988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:47.819 [2024-12-05 11:03:14.952991] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.952995] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0f750): datao=0, datal=1024, cccid=4 00:19:47.819 [2024-12-05 11:03:14.953000] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf73d40) on tqpair(0xf0f750): expected_datao=0, payload_size=1024 00:19:47.819 [2024-12-05 11:03:14.953004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953010] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953014] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.819 [2024-12-05 11:03:14.953024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.819 [2024-12-05 11:03:14.953028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73ec0) on tqpair=0xf0f750 00:19:47.819 [2024-12-05 11:03:14.953046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.819 [2024-12-05 11:03:14.953052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.819 [2024-12-05 11:03:14.953055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73d40) on tqpair=0xf0f750 00:19:47.819 [2024-12-05 11:03:14.953069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.819 [2024-12-05 11:03:14.953073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0f750) 00:19:47.819 [2024-12-05 11:03:14.953078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.819 [2024-12-05 11:03:14.953095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73d40, cid 4, qid 0 00:19:47.819 [2024-12-05 11:03:14.953146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:47.819 [2024-12-05 11:03:14.953151] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:19:47.819 [2024-12-05 11:03:14.953155] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953159] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0f750): datao=0, datal=3072, cccid=4
00:19:47.819 [2024-12-05 11:03:14.953163] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf73d40) on tqpair(0xf0f750): expected_datao=0, payload_size=3072
00:19:47.819 [2024-12-05 11:03:14.953168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953174] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953178] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953185] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:47.819 [2024-12-05 11:03:14.953191] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:47.819 [2024-12-05 11:03:14.953194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73d40) on tqpair=0xf0f750
00:19:47.819 [2024-12-05 11:03:14.953206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:19:47.819 [2024-12-05 11:03:14.953210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf0f750)
00:19:47.819 [2024-12-05 11:03:14.953215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:47.819 [2024-12-05 11:03:14.953232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73d40, cid 4, qid 0
00:19:47.819 =====================================================
00:19:47.819 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:47.819 =====================================================
00:19:47.819 Controller Capabilities/Features
00:19:47.819 ================================
00:19:47.819 Vendor ID: 0000
00:19:47.819 Subsystem Vendor ID: 0000
00:19:47.819 Serial Number: ....................
00:19:47.819 Model Number: ........................................
00:19:47.819 Firmware Version: 25.01
00:19:47.819 Recommended Arb Burst: 0
00:19:47.819 IEEE OUI Identifier: 00 00 00
00:19:47.819 Multi-path I/O
00:19:47.819 May have multiple subsystem ports: No
00:19:47.819 May have multiple controllers: No
00:19:47.819 Associated with SR-IOV VF: No
00:19:47.819 Max Data Transfer Size: 131072
00:19:47.819 Max Number of Namespaces: 0
00:19:47.819 Max Number of I/O Queues: 1024
00:19:47.819 NVMe Specification Version (VS): 1.3
00:19:47.819 NVMe Specification Version (Identify): 1.3
00:19:47.819 Maximum Queue Entries: 128
00:19:47.819 Contiguous Queues Required: Yes
00:19:47.819 Arbitration Mechanisms Supported
00:19:47.819 Weighted Round Robin: Not Supported
00:19:47.819 Vendor Specific: Not Supported
00:19:47.819 Reset Timeout: 15000 ms
00:19:47.819 Doorbell Stride: 4 bytes
00:19:47.819 NVM Subsystem Reset: Not Supported
00:19:47.819 Command Sets Supported
00:19:47.819 NVM Command Set: Supported
00:19:47.819 Boot Partition: Not Supported
00:19:47.819 Memory Page Size Minimum: 4096 bytes
00:19:47.819 Memory Page Size Maximum: 4096 bytes
00:19:47.819 Persistent Memory Region: Not Supported
00:19:47.820 Optional Asynchronous Events Supported
00:19:47.820 Namespace Attribute Notices: Not Supported
00:19:47.820 Firmware Activation Notices: Not Supported
00:19:47.820 ANA Change Notices: Not Supported
00:19:47.820 PLE Aggregate Log Change Notices: Not Supported
00:19:47.820 LBA Status Info Alert Notices: Not Supported
00:19:47.820 EGE Aggregate Log Change Notices: Not Supported
00:19:47.820 Normal NVM Subsystem Shutdown event: Not Supported
00:19:47.820 Zone Descriptor Change Notices: Not Supported
00:19:47.820 Discovery Log Change Notices: Supported
00:19:47.820 Controller Attributes
00:19:47.820 128-bit Host Identifier: Not Supported
00:19:47.820 Non-Operational Permissive Mode: Not Supported
00:19:47.820 NVM Sets: Not Supported
00:19:47.820 Read Recovery Levels: Not Supported
00:19:47.820 Endurance Groups: Not Supported
00:19:47.820 Predictable Latency Mode: Not Supported
00:19:47.820 Traffic Based Keep ALive: Not Supported
00:19:47.820 Namespace Granularity: Not Supported
00:19:47.820 SQ Associations: Not Supported
00:19:47.820 UUID List: Not Supported
00:19:47.820 Multi-Domain Subsystem: Not Supported
00:19:47.820 Fixed Capacity Management: Not Supported
00:19:47.820 Variable Capacity Management: Not Supported
00:19:47.820 Delete Endurance Group: Not Supported
00:19:47.820 Delete NVM Set: Not Supported
00:19:47.820 Extended LBA Formats Supported: Not Supported
00:19:47.820 Flexible Data Placement Supported: Not Supported
00:19:47.820 
00:19:47.820 Controller Memory Buffer Support
00:19:47.820 ================================
00:19:47.820 Supported: No
00:19:47.820 
00:19:47.820 Persistent Memory Region Support
00:19:47.820 ================================
00:19:47.820 Supported: No
00:19:47.820 
00:19:47.820 Admin Command Set Attributes
00:19:47.820 ============================
00:19:47.820 Security Send/Receive: Not Supported
00:19:47.820 Format NVM: Not Supported
00:19:47.820 Firmware Activate/Download: Not Supported
00:19:47.820 Namespace Management: Not Supported
00:19:47.820 Device Self-Test: Not Supported
00:19:47.820 Directives: Not Supported
00:19:47.820 NVMe-MI: Not Supported
00:19:47.820 Virtualization Management: Not Supported
00:19:47.820 Doorbell Buffer Config: Not Supported
00:19:47.820 Get LBA Status Capability: Not Supported
00:19:47.820 Command & Feature Lockdown Capability: Not Supported
00:19:47.820 Abort Command Limit: 1
00:19:47.820 Async Event Request Limit: 4
00:19:47.820 Number of Firmware Slots: N/A
00:19:47.820 Firmware Slot 1 Read-Only: N/A
00:19:47.820 Firmware Activation Without Reset: N/A
00:19:47.820 Multiple Update Detection Support: N/A
00:19:47.820 Firmware Update Granularity: No Information Provided
00:19:47.820 Per-Namespace SMART Log: No
00:19:47.820 Asymmetric Namespace Access Log Page: Not Supported
00:19:47.820 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:19:47.820 Command Effects Log Page: Not Supported
00:19:47.820 Get Log Page Extended Data: Supported
00:19:47.820 Telemetry Log Pages: Not Supported
00:19:47.820 Persistent Event Log Pages: Not Supported
00:19:47.820 Supported Log Pages Log Page: May Support
00:19:47.820 Commands Supported & Effects Log Page: Not Supported
00:19:47.820 Feature Identifiers & Effects Log Page:May Support
00:19:47.820 NVMe-MI Commands & Effects Log Page: May Support
00:19:47.820 Data Area 4 for Telemetry Log: Not Supported
00:19:47.820 Error Log Page Entries Supported: 128
00:19:47.820 Keep Alive: Not Supported
00:19:47.820 
00:19:47.820 NVM Command Set Attributes
00:19:47.820 ==========================
00:19:47.820 Submission Queue Entry Size
00:19:47.820 Max: 1
00:19:47.820 Min: 1
00:19:47.820 Completion Queue Entry Size
00:19:47.820 Max: 1
00:19:47.820 Min: 1
00:19:47.820 Number of Namespaces: 0
00:19:47.820 Compare Command: Not Supported
00:19:47.820 Write Uncorrectable Command: Not Supported
00:19:47.820 Dataset Management Command: Not Supported
00:19:47.820 Write Zeroes Command: Not Supported
00:19:47.820 Set Features Save Field: Not Supported
00:19:47.820 Reservations: Not Supported
00:19:47.820 Timestamp: Not Supported
00:19:47.820 Copy: Not Supported
00:19:47.820 Volatile Write Cache: Not Present
00:19:47.820 Atomic Write Unit (Normal): 1
00:19:47.820 Atomic Write Unit (PFail): 1
00:19:47.820 Atomic Compare & Write Unit: 1
00:19:47.820 Fused Compare & Write: Supported
00:19:47.820 Scatter-Gather List
00:19:47.820 SGL Command Set: Supported
00:19:47.820 SGL Keyed: Supported
00:19:47.820 SGL Bit Bucket Descriptor: Not Supported
00:19:47.820 SGL Metadata Pointer: Not Supported
00:19:47.820 Oversized SGL: Not Supported
00:19:47.820 SGL Metadata Address: Not Supported
00:19:47.820 SGL Offset: Supported
00:19:47.820 Transport SGL Data Block: Not Supported
00:19:47.820 Replay Protected Memory Block: Not Supported
00:19:47.820 
00:19:47.820 Firmware Slot Information
00:19:47.820 =========================
00:19:47.820 Active slot: 0
00:19:47.820 
00:19:47.820 
00:19:47.820 Error Log
00:19:47.820 =========
00:19:47.820 
00:19:47.820 Active Namespaces
00:19:47.820 =================
00:19:47.820 Discovery Log Page
00:19:47.820 ==================
00:19:47.820 Generation Counter: 2
00:19:47.820 Number of Records: 2
00:19:47.820 Record Format: 0
00:19:47.820 
00:19:47.820 Discovery Log Entry 0
00:19:47.820 ----------------------
00:19:47.820 Transport Type: 3 (TCP)
00:19:47.820 Address Family: 1 (IPv4)
00:19:47.820 Subsystem Type: 3 (Current Discovery Subsystem)
00:19:47.820 Entry Flags:
00:19:47.820 Duplicate Returned Information: 1
00:19:47.820 Explicit Persistent Connection Support for Discovery: 1
00:19:47.820 Transport Requirements:
00:19:47.820 Secure Channel: Not Required
00:19:47.820 Port ID: 0 (0x0000)
00:19:47.820 Controller ID: 65535 (0xffff)
00:19:47.820 Admin Max SQ Size: 128
00:19:47.820 Transport Service Identifier: 4420
00:19:47.820 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:19:47.820 Transport Address: 10.0.0.2
00:19:47.820 Discovery Log Entry 1
00:19:47.820 ----------------------
00:19:47.820 Transport Type: 3 (TCP)
00:19:47.820 Address Family: 1 (IPv4)
00:19:47.820 Subsystem Type: 2 (NVM Subsystem)
00:19:47.820 Entry Flags:
00:19:47.820 Duplicate Returned Information: 0
00:19:47.820 Explicit Persistent Connection Support for Discovery: 0
00:19:47.820 Transport Requirements:
00:19:47.820 Secure Channel: Not Required
00:19:47.820 Port ID: 0 (0x0000)
00:19:47.820 Controller ID: 65535 (0xffff)
00:19:47.820 Admin Max SQ Size: 128
00:19:47.820 Transport Service Identifier: 4420
00:19:47.820 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:19:47.820 Transport Address: 10.0.0.2 [2024-12-05 11:03:14.953288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:19:47.820 [2024-12-05 11:03:14.953295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:19:47.820 [2024-12-05 11:03:14.953298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:19:47.820 [2024-12-05 11:03:14.953302] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf0f750): datao=0, datal=8, cccid=4
00:19:47.820 [2024-12-05 11:03:14.953307] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf73d40) on tqpair(0xf0f750): expected_datao=0, payload_size=8
00:19:47.820 [2024-12-05 11:03:14.953312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:19:47.820 [2024-12-05 11:03:14.953317] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:19:47.820 [2024-12-05 11:03:14.953321] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:19:47.820 [2024-12-05 11:03:14.953333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:19:47.821 [2024-12-05 11:03:14.953339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:19:47.821 [2024-12-05 11:03:14.953343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:19:47.821 [2024-12-05 11:03:14.953347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73d40) on tqpair=0xf0f750
00:19:47.821 [2024-12-05 11:03:14.953428] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:19:47.821 [2024-12-05 11:03:14.953438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73740) on tqpair=0xf0f750
00:19:47.821 [2024-12-05 11:03:14.953444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.821 [2024-12-05 11:03:14.953450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf738c0) on tqpair=0xf0f750
00:19:47.821 [2024-12-05 11:03:14.953455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.821 [2024-12-05 11:03:14.953460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73a40) on tqpair=0xf0f750
00:19:47.821 [2024-12-05 11:03:14.953464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.821 [2024-12-05 11:03:14.953470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750
00:19:47.821 [2024-12-05 11:03:14.953474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:47.821 [2024-12-05 11:03:14.953485] nvme_tcp.c: 732:nvme_tcp_build_contig_request:
*DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953489] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.953556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.953562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.953565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.953575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.953652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.953658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.953661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.953670] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:19:47.821 [2024-12-05 11:03:14.953675] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:19:47.821 [2024-12-05 11:03:14.953683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.953749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.953755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.953758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) 
on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.953771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.953839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.953845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.953848] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953852] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.953861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.953934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.953940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.953944] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.953956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953960] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.953964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.953970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.953982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.954022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.954028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.954031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.954044] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954051] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.954057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.954070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.954105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.954110] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.954114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.954126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.821 [2024-12-05 11:03:14.954139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.821 [2024-12-05 11:03:14.954152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.821 [2024-12-05 11:03:14.954192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.821 [2024-12-05 11:03:14.954198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.821 [2024-12-05 11:03:14.954202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.821 [2024-12-05 11:03:14.954206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.821 [2024-12-05 11:03:14.954214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954294] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954324] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954374] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954556] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.822 [2024-12-05 11:03:14.954833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.822 [2024-12-05 11:03:14.954881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.822 [2024-12-05 11:03:14.954886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:19:47.822 [2024-12-05 11:03:14.954890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.822 [2024-12-05 11:03:14.954902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.822 [2024-12-05 11:03:14.954910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.822 [2024-12-05 11:03:14.954916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.954928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.954965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.954971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.954974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.954978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.955003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.823 [2024-12-05 11:03:14.955017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.955042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.955112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.955117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.955121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.955133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.823 [2024-12-05 11:03:14.955147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.955159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.955191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.955197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.955200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955204] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.955213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.823 [2024-12-05 11:03:14.955226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.955238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.955275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.955281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.955284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.955297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955301] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.955304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.823 [2024-12-05 11:03:14.955310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.955323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.959320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.959330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.959334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.959338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.959350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.959354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.959359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf0f750) 00:19:47.823 [2024-12-05 11:03:14.959366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:47.823 [2024-12-05 11:03:14.959385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf73bc0, cid 3, qid 0 00:19:47.823 [2024-12-05 11:03:14.959449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:47.823 [2024-12-05 11:03:14.959456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:47.823 [2024-12-05 11:03:14.959460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:47.823 [2024-12-05 11:03:14.959464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf73bc0) on tqpair=0xf0f750 00:19:47.823 [2024-12-05 11:03:14.959471] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 
milliseconds 00:19:47.823 00:19:48.086 11:03:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:48.086 [2024-12-05 11:03:15.009644] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:19:48.086 [2024-12-05 11:03:15.009693] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74070 ] 00:19:48.086 [2024-12-05 11:03:15.159488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:19:48.086 [2024-12-05 11:03:15.159564] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:48.086 [2024-12-05 11:03:15.159570] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:48.086 [2024-12-05 11:03:15.159590] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:48.086 [2024-12-05 11:03:15.159604] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:48.086 [2024-12-05 11:03:15.159983] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:19:48.086 [2024-12-05 11:03:15.160035] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a27750 0 00:19:48.086 [2024-12-05 11:03:15.164384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:48.086 [2024-12-05 11:03:15.164406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:48.086 [2024-12-05 11:03:15.164412] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:48.086 [2024-12-05 11:03:15.164416] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:48.086 [2024-12-05 11:03:15.164456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.086 [2024-12-05 11:03:15.164462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.086 [2024-12-05 11:03:15.164468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.086 [2024-12-05 11:03:15.164482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:48.086 [2024-12-05 11:03:15.164512] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.086 [2024-12-05 11:03:15.172330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.086 [2024-12-05 11:03:15.172347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.086 [2024-12-05 11:03:15.172352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.086 [2024-12-05 11:03:15.172357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.086 [2024-12-05 11:03:15.172369] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:48.086 [2024-12-05 11:03:15.172377] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:19:48.086 [2024-12-05 11:03:15.172384] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:19:48.086 [2024-12-05 11:03:15.172404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.086 [2024-12-05 11:03:15.172409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.086 [2024-12-05 11:03:15.172413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.172422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.172445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.172492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.172498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.172502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.172511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:19:48.087 [2024-12-05 11:03:15.172519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:19:48.087 [2024-12-05 11:03:15.172527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.172542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.172556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.172596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.172602] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.172606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.172615] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:19:48.087 [2024-12-05 11:03:15.172624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.172630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172635] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172639] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.172645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.172659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.172696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.172702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.172706] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.172716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.172725] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.172740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.172753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.172793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.172799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.172803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.172813] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:19:48.087 [2024-12-05 11:03:15.172818] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.172826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.172937] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:19:48.087 [2024-12-05 11:03:15.172942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.172950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.172959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.172965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.172979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.173022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.173029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.173032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173037] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.173042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:48.087 [2024-12-05 11:03:15.173051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.173065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.173079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.173116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.087 [2024-12-05 11:03:15.173122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.087 [2024-12-05 11:03:15.173126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.087 [2024-12-05 11:03:15.173135] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:48.087 [2024-12-05 11:03:15.173140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:19:48.087 [2024-12-05 11:03:15.173149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:19:48.087 [2024-12-05 11:03:15.173158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:19:48.087 [2024-12-05 11:03:15.173167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.087 [2024-12-05 11:03:15.173178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.087 [2024-12-05 11:03:15.173192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.087 [2024-12-05 11:03:15.173281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.087 [2024-12-05 11:03:15.173287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.087 [2024-12-05 11:03:15.173292] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.087 [2024-12-05 11:03:15.173296] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=4096, cccid=0 00:19:48.088 [2024-12-05 11:03:15.173302] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8b740) on tqpair(0x1a27750): expected_datao=0, payload_size=4096 00:19:48.088 [2024-12-05 11:03:15.173307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173315] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173319] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.088 [2024-12-05 11:03:15.173334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.088 [2024-12-05 11:03:15.173338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.088 [2024-12-05 11:03:15.173369] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:19:48.088 [2024-12-05 11:03:15.173374] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:19:48.088 [2024-12-05 11:03:15.173379] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:19:48.088 [2024-12-05 11:03:15.173387] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:19:48.088 [2024-12-05 11:03:15.173393] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:19:48.088 [2024-12-05 11:03:15.173398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:48.088 [2024-12-05 11:03:15.173459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.088 [2024-12-05 11:03:15.173513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.088 [2024-12-05 11:03:15.173519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.088 [2024-12-05 11:03:15.173523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.088 [2024-12-05 11:03:15.173534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173547] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.088 [2024-12-05 11:03:15.173553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.088 [2024-12-05 11:03:15.173572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.088 [2024-12-05 11:03:15.173591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.088 [2024-12-05 11:03:15.173609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.088 [2024-12-05 11:03:15.173650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b740, cid 0, qid 0 00:19:48.088 [2024-12-05 11:03:15.173656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8b8c0, cid 1, qid 0 00:19:48.088 [2024-12-05 11:03:15.173660] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8ba40, cid 2, qid 0 00:19:48.088 [2024-12-05 11:03:15.173665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.088 [2024-12-05 11:03:15.173669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.088 [2024-12-05 11:03:15.173739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.088 [2024-12-05 11:03:15.173745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.088 [2024-12-05 11:03:15.173748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173752] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.088 [2024-12-05 11:03:15.173758] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:19:48.088 [2024-12-05 11:03:15.173763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173771] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173777] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:48.088 [2024-12-05 11:03:15.173828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.088 [2024-12-05 11:03:15.173871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.088 [2024-12-05 11:03:15.173877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.088 [2024-12-05 11:03:15.173881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.088 [2024-12-05 11:03:15.173952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.173970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.173974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.173981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.088 [2024-12-05 11:03:15.173995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.088 [2024-12-05 11:03:15.174043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.088 [2024-12-05 11:03:15.174049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.088 [2024-12-05 11:03:15.174053] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174057] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=4096, cccid=4 00:19:48.088 [2024-12-05 11:03:15.174062] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bd40) on tqpair(0x1a27750): expected_datao=0, 
payload_size=4096 00:19:48.088 [2024-12-05 11:03:15.174067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174074] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174078] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.088 [2024-12-05 11:03:15.174091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.088 [2024-12-05 11:03:15.174095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.088 [2024-12-05 11:03:15.174109] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:19:48.088 [2024-12-05 11:03:15.174120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.174130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:19:48.088 [2024-12-05 11:03:15.174137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.088 [2024-12-05 11:03:15.174141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.088 [2024-12-05 11:03:15.174147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.088 [2024-12-05 11:03:15.174161] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.089 [2024-12-05 11:03:15.174224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.089 [2024-12-05 11:03:15.174230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.089 [2024-12-05 11:03:15.174233] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174237] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=4096, cccid=4 00:19:48.089 [2024-12-05 11:03:15.174242] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bd40) on tqpair(0x1a27750): expected_datao=0, payload_size=4096 00:19:48.089 [2024-12-05 11:03:15.174247] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174254] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174257] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174321] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.089 [2024-12-05 11:03:15.174398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.089 [2024-12-05 11:03:15.174404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.089 [2024-12-05 11:03:15.174408] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174412] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=4096, cccid=4 00:19:48.089 [2024-12-05 11:03:15.174417] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bd40) on tqpair(0x1a27750): expected_datao=0, payload_size=4096 00:19:48.089 [2024-12-05 11:03:15.174422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174428] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174503] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:19:48.089 [2024-12-05 11:03:15.174508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:19:48.089 [2024-12-05 11:03:15.174514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:19:48.089 [2024-12-05 11:03:15.174532] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.089 [2024-12-05 11:03:15.174582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.089 [2024-12-05 11:03:15.174588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bec0, cid 5, qid 0 00:19:48.089 [2024-12-05 11:03:15.174642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174656] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bec0) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bec0, cid 5, qid 0 00:19:48.089 [2024-12-05 11:03:15.174745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bec0) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 
11:03:15.174773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bec0, cid 5, qid 0 00:19:48.089 [2024-12-05 11:03:15.174832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bec0) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bec0, cid 5, qid 0 00:19:48.089 [2024-12-05 11:03:15.174924] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.089 [2024-12-05 11:03:15.174929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.089 [2024-12-05 11:03:15.174933] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bec0) on tqpair=0x1a27750 00:19:48.089 [2024-12-05 11:03:15.174954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.174989] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.174993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a27750) 00:19:48.089 [2024-12-05 11:03:15.174999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.089 [2024-12-05 11:03:15.175007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.089 [2024-12-05 11:03:15.175011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on 
tqpair(0x1a27750) 00:19:48.090 [2024-12-05 11:03:15.175017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.090 [2024-12-05 11:03:15.175031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bec0, cid 5, qid 0 00:19:48.090 [2024-12-05 11:03:15.175037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bd40, cid 4, qid 0 00:19:48.090 [2024-12-05 11:03:15.175041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c040, cid 6, qid 0 00:19:48.090 [2024-12-05 11:03:15.175046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c1c0, cid 7, qid 0 00:19:48.090 [2024-12-05 11:03:15.175160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.090 [2024-12-05 11:03:15.175166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.090 [2024-12-05 11:03:15.175170] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=8192, cccid=5 00:19:48.090 [2024-12-05 11:03:15.175179] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bec0) on tqpair(0x1a27750): expected_datao=0, payload_size=8192 00:19:48.090 [2024-12-05 11:03:15.175184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175200] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175204] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.090 [2024-12-05 11:03:15.175215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.090 [2024-12-05 11:03:15.175219] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175223] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=512, cccid=4 00:19:48.090 [2024-12-05 11:03:15.175228] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8bd40) on tqpair(0x1a27750): expected_datao=0, payload_size=512 00:19:48.090 [2024-12-05 11:03:15.175233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.090 [2024-12-05 11:03:15.175254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.090 [2024-12-05 11:03:15.175258] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=512, cccid=6 00:19:48.090 [2024-12-05 11:03:15.175267] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c040) on tqpair(0x1a27750): expected_datao=0, payload_size=512 00:19:48.090 [2024-12-05 11:03:15.175281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
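The DEBUG records above trace the full admin-queue bring-up that spdk_nvme_identify drives against nqn.2016-06.io.spdk:cnode1: FABRIC CONNECT, PROPERTY GET reads of VS and CAP, the CC.EN = 0 / CSTS.RDY = 0 disable handshake, a PROPERTY SET of CC.EN = 1, the wait for CSTS.RDY = 1, then IDENTIFY (CNS 01h, 02h, 00h, 03h), AER configuration, keep-alive setup, queue-count negotiation, and the GET LOG PAGE fan-out. From the application side, all of that is driven by one connect call. Below is a minimal sketch of that call path against this target, assuming an SPDK build environment; it uses only public API from spdk/env.h and spdk/nvme.h, is not the test's code, and trims error reporting:

/* identify_min.c -- minimal sketch: one spdk_nvme_connect() call performs
 * the entire CONNECT -> PROPERTY GET/SET -> CC.EN=1 -> CSTS.RDY=1 ->
 * IDENTIFY sequence traced in the log above. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_min";
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport-ID string format as the -r argument above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Blocks until the controller reaches the "ready" state logged above. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    /* Identify Controller data (CNS 01h) cached during connect. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model: %.40s Serial: %.20s FW: %.8s\n",
           cdata->mn, cdata->sn, cdata->fr);

    /* Triggers the CC.SHN shutdown handshake traced further down. */
    spdk_nvme_detach(ctrlr);
    return 0;
}

The string given to spdk_nvme_transport_id_parse() here is the same transport-ID format seen in the -r argument of the spdk_nvme_identify invocation above.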
00:19:48.090 [2024-12-05 11:03:15.175292] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:48.090 [2024-12-05 11:03:15.175303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:48.090 [2024-12-05 11:03:15.175307] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175311] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a27750): datao=0, datal=4096, cccid=7 00:19:48.090 [2024-12-05 11:03:15.175316] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a8c1c0) on tqpair(0x1a27750): expected_datao=0, payload_size=4096 00:19:48.090 [2024-12-05 11:03:15.175321] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175327] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175331] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.090 [2024-12-05 11:03:15.175343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.090 [2024-12-05 11:03:15.175346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.090 [2024-12-05 11:03:15.175351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bec0) on tqpair=0x1a27750 00:19:48.090 ===================================================== 00:19:48.090 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:48.090 ===================================================== 00:19:48.090 Controller Capabilities/Features 00:19:48.090 ================================ 00:19:48.090 Vendor ID: 8086 00:19:48.090 Subsystem Vendor ID: 8086 00:19:48.090 Serial Number: SPDK00000000000001 00:19:48.090 Model Number: SPDK bdev Controller 00:19:48.090 Firmware Version: 25.01 00:19:48.090 Recommended Arb Burst: 6 00:19:48.090 IEEE OUI Identifier: e4 d2 5c 00:19:48.090 Multi-path I/O 00:19:48.090 May have multiple subsystem ports: Yes 00:19:48.090 May have multiple controllers: Yes 00:19:48.090 Associated with SR-IOV VF: No 00:19:48.090 Max Data Transfer Size: 131072 00:19:48.090 Max Number of Namespaces: 32 00:19:48.090 Max Number of I/O Queues: 127 00:19:48.090 NVMe Specification Version (VS): 1.3 00:19:48.090 NVMe Specification Version (Identify): 1.3 00:19:48.090 Maximum Queue Entries: 128 00:19:48.090 Contiguous Queues Required: Yes 00:19:48.090 Arbitration Mechanisms Supported 00:19:48.090 Weighted Round Robin: Not Supported 00:19:48.090 Vendor Specific: Not Supported 00:19:48.090 Reset Timeout: 15000 ms 00:19:48.090 Doorbell Stride: 4 bytes 00:19:48.090 NVM Subsystem Reset: Not Supported 00:19:48.090 Command Sets Supported 00:19:48.090 NVM Command Set: Supported 00:19:48.090 Boot Partition: Not Supported 00:19:48.090 Memory Page Size Minimum: 4096 bytes 00:19:48.090 Memory Page Size Maximum: 4096 bytes 00:19:48.090 Persistent Memory Region: Not Supported 00:19:48.090 Optional Asynchronous Events Supported 00:19:48.090 Namespace Attribute Notices: Supported 00:19:48.090 Firmware Activation Notices: Not Supported 00:19:48.090 ANA Change Notices: Not Supported 00:19:48.090 PLE Aggregate Log Change Notices: Not Supported 00:19:48.090 LBA Status Info Alert Notices: Not Supported 00:19:48.090 EGE Aggregate Log Change Notices: Not Supported 00:19:48.090 
Normal NVM Subsystem Shutdown event: Not Supported 00:19:48.090 Zone Descriptor Change Notices: Not Supported 00:19:48.090 Discovery Log Change Notices: Not Supported 00:19:48.090 Controller Attributes 00:19:48.090 128-bit Host Identifier: Supported 00:19:48.090 Non-Operational Permissive Mode: Not Supported 00:19:48.090 NVM Sets: Not Supported 00:19:48.090 Read Recovery Levels: Not Supported 00:19:48.090 Endurance Groups: Not Supported 00:19:48.090 Predictable Latency Mode: Not Supported 00:19:48.090 Traffic Based Keep Alive: Not Supported 00:19:48.090 Namespace Granularity: Not Supported 00:19:48.090 SQ Associations: Not Supported 00:19:48.090 UUID List: Not Supported 00:19:48.090 Multi-Domain Subsystem: Not Supported 00:19:48.090 Fixed Capacity Management: Not Supported 00:19:48.090 Variable Capacity Management: Not Supported 00:19:48.090 Delete Endurance Group: Not Supported 00:19:48.090 Delete NVM Set: Not Supported 00:19:48.090 Extended LBA Formats Supported: Not Supported 00:19:48.090 Flexible Data Placement Supported: Not Supported 00:19:48.090 00:19:48.090 Controller Memory Buffer Support 00:19:48.090 ================================ 00:19:48.090 Supported: No 00:19:48.090 00:19:48.090 Persistent Memory Region Support 00:19:48.090 ================================ 00:19:48.090 Supported: No 00:19:48.090 00:19:48.090 Admin Command Set Attributes 00:19:48.090 ============================ 00:19:48.090 Security Send/Receive: Not Supported 00:19:48.090 Format NVM: Not Supported 00:19:48.090 Firmware Activate/Download: Not Supported 00:19:48.090 Namespace Management: Not Supported 00:19:48.090 Device Self-Test: Not Supported 00:19:48.090 Directives: Not Supported 00:19:48.090 NVMe-MI: Not Supported 00:19:48.090 Virtualization Management: Not Supported 00:19:48.090 Doorbell Buffer Config: Not Supported 00:19:48.090 Get LBA Status Capability: Not Supported 00:19:48.090 Command & Feature Lockdown Capability: Not Supported 00:19:48.090 Abort Command Limit: 4 00:19:48.090 Async Event Request Limit: 4 00:19:48.090 Number of Firmware Slots: N/A 00:19:48.090 Firmware Slot 1 Read-Only: N/A 00:19:48.090 Firmware Activation Without Reset: N/A 00:19:48.090 Multiple Update Detection Support: N/A 00:19:48.090 Firmware Update Granularity: No Information Provided 00:19:48.090 Per-Namespace SMART Log: No 00:19:48.090 Asymmetric Namespace Access Log Page: Not Supported 00:19:48.090 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:48.090 Command Effects Log Page: Supported 00:19:48.090 Get Log Page Extended Data: Supported 00:19:48.090 Telemetry Log Pages: Not Supported 00:19:48.090 Persistent Event Log Pages: Not Supported 00:19:48.090 Supported Log Pages Log Page: May Support 00:19:48.090 Commands Supported & Effects Log Page: Not Supported 00:19:48.090 Feature Identifiers & Effects Log Page: May Support 00:19:48.090 NVMe-MI Commands & Effects Log Page: May Support 00:19:48.090 Data Area 4 for Telemetry Log: Not Supported 00:19:48.090 Error Log Page Entries Supported: 128 00:19:48.090 Keep Alive: Supported 00:19:48.090 Keep Alive Granularity: 10000 ms 00:19:48.090 00:19:48.090 NVM Command Set Attributes 00:19:48.090 ========================== 00:19:48.090 Submission Queue Entry Size 00:19:48.090 Max: 64 00:19:48.090 Min: 64 00:19:48.090 Completion Queue Entry Size 00:19:48.090 Max: 16 00:19:48.090 Min: 16 00:19:48.090 Number of Namespaces: 32 00:19:48.090 Compare Command: Supported 00:19:48.090 Write Uncorrectable Command: Not Supported 00:19:48.090 Dataset Management Command: Supported 00:19:48.090 Write
Zeroes Command: Supported 00:19:48.090 Set Features Save Field: Not Supported 00:19:48.090 Reservations: Supported 00:19:48.090 Timestamp: Not Supported 00:19:48.090 Copy: Supported 00:19:48.090 Volatile Write Cache: Present 00:19:48.090 Atomic Write Unit (Normal): 1 00:19:48.090 Atomic Write Unit (PFail): 1 00:19:48.090 Atomic Compare & Write Unit: 1 00:19:48.090 Fused Compare & Write: Supported 00:19:48.090 Scatter-Gather List 00:19:48.090 SGL Command Set: Supported 00:19:48.090 SGL Keyed: Supported 00:19:48.090 SGL Bit Bucket Descriptor: Not Supported 00:19:48.090 SGL Metadata Pointer: Not Supported 00:19:48.090 Oversized SGL: Not Supported 00:19:48.090 SGL Metadata Address: Not Supported 00:19:48.090 SGL Offset: Supported 00:19:48.090 Transport SGL Data Block: Not Supported 00:19:48.090 Replay Protected Memory Block: Not Supported 00:19:48.090 00:19:48.090 Firmware Slot Information 00:19:48.090 ========================= 00:19:48.090 Active slot: 1 00:19:48.090 Slot 1 Firmware Revision: 25.01 00:19:48.090 00:19:48.090 00:19:48.090 Commands Supported and Effects 00:19:48.090 ============================== 00:19:48.090 Admin Commands 00:19:48.090 -------------- 00:19:48.090 Get Log Page (02h): Supported 00:19:48.091 Identify (06h): Supported 00:19:48.091 Abort (08h): Supported 00:19:48.091 Set Features (09h): Supported 00:19:48.091 Get Features (0Ah): Supported 00:19:48.091 Asynchronous Event Request (0Ch): Supported 00:19:48.091 Keep Alive (18h): Supported 00:19:48.091 I/O Commands 00:19:48.091 ------------ 00:19:48.091 Flush (00h): Supported LBA-Change 00:19:48.091 Write (01h): Supported LBA-Change 00:19:48.091 Read (02h): Supported 00:19:48.091 Compare (05h): Supported 00:19:48.091 Write Zeroes (08h): Supported LBA-Change 00:19:48.091 Dataset Management (09h): Supported LBA-Change 00:19:48.091 Copy (19h): Supported LBA-Change 00:19:48.091 00:19:48.091 Error Log 00:19:48.091 ========= 00:19:48.091 00:19:48.091 Arbitration 00:19:48.091 =========== 00:19:48.091 Arbitration Burst: 1 00:19:48.091 00:19:48.091 Power Management 00:19:48.091 ================ 00:19:48.091 Number of Power States: 1 00:19:48.091 Current Power State: Power State #0 00:19:48.091 Power State #0: 00:19:48.091 Max Power: 0.00 W 00:19:48.091 Non-Operational State: Operational 00:19:48.091 Entry Latency: Not Reported 00:19:48.091 Exit Latency: Not Reported 00:19:48.091 Relative Read Throughput: 0 00:19:48.091 Relative Read Latency: 0 00:19:48.091 Relative Write Throughput: 0 00:19:48.091 Relative Write Latency: 0 00:19:48.091 Idle Power: Not Reported 00:19:48.091 Active Power: Not Reported 00:19:48.091 Non-Operational Permissive Mode: Not Supported 00:19:48.091 00:19:48.091 Health Information 00:19:48.091 ================== 00:19:48.091 Critical Warnings: 00:19:48.091 Available Spare Space: OK 00:19:48.091 Temperature: OK 00:19:48.091 Device Reliability: OK 00:19:48.091 Read Only: No 00:19:48.091 Volatile Memory Backup: OK 00:19:48.091 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:48.091 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:48.091 Available Spare: 0% 00:19:48.091 Available Spare Threshold: 0% 00:19:48.091 Life Percentage Used:[2024-12-05 11:03:15.175365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175379] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bd40) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c040) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175419] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175423] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c1c0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.175533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.175549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8c1c0, cid 7, qid 0 00:19:48.091 [2024-12-05 11:03:15.175590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8c1c0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175638] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:19:48.091 [2024-12-05 11:03:15.175647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b740) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.091 [2024-12-05 11:03:15.175660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8b8c0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.091 [2024-12-05 11:03:15.175670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8ba40) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.091 [2024-12-05 11:03:15.175680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:48.091 [2024-12-05 11:03:15.175693] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 
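At this point the host has begun tearing the controller down ("Prepare to destruct SSD" above), and every request still queued on the admin qpair — here the four outstanding ASYNC EVENT REQUESTs on cid 0-3 — is completed with ABORTED - SQ DELETION (00/08). A completion callback can recognize that status with the spec constants from spdk/nvme_spec.h; a minimal fragment, assuming the spdk_nvme_cmd_cb callback shape from spdk/nvme.h:

#include <stdio.h>
#include "spdk/nvme.h"

/* Matches the "ABORTED - SQ DELETION (00/08)" completions printed above:
 * status code type 00h (generic), status code 08h. */
static void on_completion(void *ctx, const struct spdk_nvme_cpl *cpl)
{
    (void)ctx;
    if (spdk_nvme_cpl_is_error(cpl) &&
        cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        /* The request was flushed because its submission queue was
         * deleted during detach, not because the command itself failed. */
        fprintf(stderr, "request aborted by SQ deletion\n");
    }
}

Such a callback would be passed to a submission function such as spdk_nvme_ctrlr_cmd_admin_raw(); the fragment only illustrates how the printed status maps onto the cpl->status fields.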
[2024-12-05 11:03:15.175698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175702] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.175708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.175724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.175770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.175805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.175820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.175877] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175891] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.175896] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:19:48.091 [2024-12-05 11:03:15.175901] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:19:48.091 [2024-12-05 11:03:15.175910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175914] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.175925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.175938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.175983] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.175989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.175993] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.175997] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 
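The exchange just logged — a FABRIC PROPERTY GET of CC, a PROPERTY SET, then "RTD3E = 0 us" and "shutdown timeout = 10000 ms" followed by repeated PROPERTY GETs on cid:3 — is the spec-level shutdown handshake: write CC.SHN = 01b (normal shutdown notification), then poll CSTS.SHST until the controller reports shutdown complete (the log later records "shutdown complete in 7 milliseconds"). A sketch of that handshake using the register layouts from spdk/nvme_spec.h; prop_get32/prop_set32 are hypothetical stand-ins for the Fabrics property commands, which the real driver issues asynchronously from its state machine:

#include <stdint.h>
#include "spdk/nvme_spec.h"

/* Hypothetical synchronous property accessors; over NVMe-oF these map to
 * the FABRIC PROPERTY GET/SET capsules seen in the trace. CC sits at
 * offset 0x14 and CSTS at 0x1c in the controller property space. */
extern uint32_t prop_get32(uint32_t ofst);
extern void prop_set32(uint32_t ofst, uint32_t val);

static void shutdown_normally(void)
{
    union spdk_nvme_cc_register cc;
    union spdk_nvme_csts_register csts;

    cc.raw = prop_get32(0x14);
    cc.bits.shn = SPDK_NVME_SHN_NORMAL;  /* 01b: request normal shutdown */
    prop_set32(0x14, cc.raw);

    /* Poll CSTS.SHST; RTD3E = 0 above means the driver falls back to its
     * default 10000 ms budget, and this target finishes in 7 ms. */
    do {
        csts.raw = prop_get32(0x1c);
    } while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}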
[2024-12-05 11:03:15.176006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.176021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.176034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.176073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.176079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.176083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.176096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176104] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.176111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.176123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.176160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.176166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.176170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.091 [2024-12-05 11:03:15.176183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.091 [2024-12-05 11:03:15.176197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.091 [2024-12-05 11:03:15.176210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.091 [2024-12-05 11:03:15.176247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.091 [2024-12-05 11:03:15.176253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.091 [2024-12-05 11:03:15.176257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.091 [2024-12-05 11:03:15.176261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.092 [2024-12-05 11:03:15.176270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:48.092 [2024-12-05 11:03:15.183294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:48.092 
[2024-12-05 11:03:15.183301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a27750) 00:19:48.092 [2024-12-05 11:03:15.183309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:48.092 [2024-12-05 11:03:15.183330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a8bbc0, cid 3, qid 0 00:19:48.092 [2024-12-05 11:03:15.183377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:48.092 [2024-12-05 11:03:15.183384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:48.092 [2024-12-05 11:03:15.183388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:48.092 [2024-12-05 11:03:15.183392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a8bbc0) on tqpair=0x1a27750 00:19:48.092 [2024-12-05 11:03:15.183400] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:19:48.092 0% 00:19:48.092 Data Units Read: 0 00:19:48.092 Data Units Written: 0 00:19:48.092 Host Read Commands: 0 00:19:48.092 Host Write Commands: 0 00:19:48.092 Controller Busy Time: 0 minutes 00:19:48.092 Power Cycles: 0 00:19:48.092 Power On Hours: 0 hours 00:19:48.092 Unsafe Shutdowns: 0 00:19:48.092 Unrecoverable Media Errors: 0 00:19:48.092 Lifetime Error Log Entries: 0 00:19:48.092 Warning Temperature Time: 0 minutes 00:19:48.092 Critical Temperature Time: 0 minutes 00:19:48.092 00:19:48.092 Number of Queues 00:19:48.092 ================ 00:19:48.092 Number of I/O Submission Queues: 127 00:19:48.092 Number of I/O Completion Queues: 127 00:19:48.092 00:19:48.092 Active Namespaces 00:19:48.092 ================= 00:19:48.092 Namespace ID:1 00:19:48.092 Error Recovery Timeout: Unlimited 00:19:48.092 Command Set Identifier: NVM (00h) 00:19:48.092 Deallocate: Supported 00:19:48.092 Deallocated/Unwritten Error: Not Supported 00:19:48.092 Deallocated Read Value: Unknown 00:19:48.092 Deallocate in Write Zeroes: Not Supported 00:19:48.092 Deallocated Guard Field: 0xFFFF 00:19:48.092 Flush: Supported 00:19:48.092 Reservation: Supported 00:19:48.092 Namespace Sharing Capabilities: Multiple Controllers 00:19:48.092 Size (in LBAs): 131072 (0GiB) 00:19:48.092 Capacity (in LBAs): 131072 (0GiB) 00:19:48.092 Utilization (in LBAs): 131072 (0GiB) 00:19:48.092 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:48.092 EUI64: ABCDEF0123456789 00:19:48.092 UUID: f0fd9ee7-bfef-46bd-b205-6878e62c0e04 00:19:48.092 Thin Provisioning: Not Supported 00:19:48.092 Per-NS Atomic Units: Yes 00:19:48.092 Atomic Boundary Size (Normal): 0 00:19:48.092 Atomic Boundary Size (PFail): 0 00:19:48.092 Atomic Boundary Offset: 0 00:19:48.092 Maximum Single Source Range Length: 65535 00:19:48.092 Maximum Copy Length: 65535 00:19:48.092 Maximum Source Range Count: 1 00:19:48.092 NGUID/EUI64 Never Reused: No 00:19:48.092 Namespace Write Protected: No 00:19:48.092 Number of LBA Formats: 1 00:19:48.092 Current LBA Format: LBA Format #00 00:19:48.092 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:48.092 00:19:48.092 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.350 11:03:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:48.350 rmmod nvme_tcp 00:19:48.350 rmmod nvme_fabrics 00:19:48.350 rmmod nvme_keyring 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 74033 ']' 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 74033 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74033 ']' 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74033 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74033 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74033' 00:19:48.350 killing process with pid 74033 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74033 00:19:48.350 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74033 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:48.609 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:48.610 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:19:48.869 00:19:48.869 real 0m3.107s 00:19:48.869 user 0m7.241s 00:19:48.869 sys 0m1.025s 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:19:48.869 ************************************ 00:19:48.869 END TEST nvmf_identify 00:19:48.869 ************************************ 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.869 ************************************ 00:19:48.869 START TEST nvmf_perf 00:19:48.869 ************************************ 00:19:48.869 11:03:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:48.869 * Looking for test storage... 00:19:49.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:49.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.131 --rc genhtml_branch_coverage=1 00:19:49.131 --rc genhtml_function_coverage=1 00:19:49.131 --rc genhtml_legend=1 00:19:49.131 --rc geninfo_all_blocks=1 00:19:49.131 --rc geninfo_unexecuted_blocks=1 00:19:49.131 00:19:49.131 ' 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:49.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.131 --rc genhtml_branch_coverage=1 00:19:49.131 --rc genhtml_function_coverage=1 00:19:49.131 --rc genhtml_legend=1 00:19:49.131 --rc geninfo_all_blocks=1 00:19:49.131 --rc geninfo_unexecuted_blocks=1 00:19:49.131 00:19:49.131 ' 00:19:49.131 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.132 --rc genhtml_branch_coverage=1 00:19:49.132 --rc genhtml_function_coverage=1 00:19:49.132 --rc genhtml_legend=1 00:19:49.132 --rc geninfo_all_blocks=1 00:19:49.132 --rc geninfo_unexecuted_blocks=1 00:19:49.132 00:19:49.132 ' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:49.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.132 --rc genhtml_branch_coverage=1 00:19:49.132 --rc genhtml_function_coverage=1 00:19:49.132 --rc genhtml_legend=1 00:19:49.132 --rc geninfo_all_blocks=1 00:19:49.132 --rc geninfo_unexecuted_blocks=1 00:19:49.132 00:19:49.132 ' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:49.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@296 -- # prepare_net_devs 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@223 -- # create_target_ns 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.132 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:49.133 11:03:16 
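[Editor's note] Condensed from the trace that follows, this is the per-pair network scaffolding nvmf/setup.sh builds for NET_TYPE=virt; a sketch using the device names from the log, not the script itself:

  ip netns add nvmf_ns_spdk                                 # target side runs in its own netns
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link add initiator0 type veth peer name initiator0_br  # initiator-side veth pair
  ip link add target0    type veth peer name target0_br     # target-side veth pair
  ip link set target0 netns nvmf_ns_spdk                    # move the target end into the netns
  ip link set initiator0_br master nvmf_br                  # enslave both *_br peers to the bridge
  ip link set target0_br    master nvmf_br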
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth 
target0 target0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target0 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:49.133 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:49.415 10.0.0.1 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:49.415 
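[Editor's note] val_to_ip above unpacks addresses kept as 32-bit integers (167772161 = 0x0A000001), which is what lets setup_interfaces hand out a contiguous pool by plain arithmetic, advancing ip_pool by 2 per interface pair. An equivalent standalone sketch:

  val_to_ip() {                       # 167772161 -> 10.0.0.1
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
          $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
  }
  val_to_ip 167772161   # 10.0.0.1, assigned to initiator0
  val_to_ip 167772162   # 10.0.0.2, assigned to target0 inside nvmf_ns_spdk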
11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:49.415 10.0.0.2 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target0_br master 
nvmf_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:49.415 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- 
# eval ' ip link set initiator1_br up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772163 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:49.416 10.0.0.3 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 
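[Editor's note] The echo ... | tee /sys/class/net/<dev>/ifalias steps in this trace are bookkeeping: each assigned address is stored in the interface's ifalias so later helpers (get_ip_address, seen further down reading back 10.0.0.1 through 10.0.0.4) can fetch it with a plain cat instead of parsing ip addr output. The same pattern, isolated:

  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias     # record the address (host side)
  cat /sys/class/net/initiator1/ifalias                     # -> 10.0.0.3
  # devices inside the target namespace get the same treatment under ip netns exec:
  echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias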
00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772164 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:49.416 10.0.0.4 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 
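[Editor's note] Every firewall change in this log goes through the ipts wrapper, which stamps the rule with an 'SPDK_NVMF:' comment; teardown (iptr, visible at the end of the identify run above) then removes all test rules in one pass by filtering that comment out of a save/restore cycle. Reconstructed from the traced commands:

  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # tag rule with its own argv
  ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP on port 4420
  iptables-save | grep -v SPDK_NVMF | iptables-restore            # iptr: drop every tagged rule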
00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:49.416 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:49.417 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:49.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:19:49.677 00:19:49.677 --- 10.0.0.1 ping statistics --- 00:19:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.677 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:49.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:49.677 00:19:49.677 --- 10.0.0.2 ping statistics --- 00:19:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.677 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:49.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:49.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:19:49.677 00:19:49.677 --- 10.0.0.3 ping statistics --- 00:19:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.677 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:49.677 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:49.677 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:49.677 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:19:49.677 00:19:49.677 --- 10.0.0.4 ping statistics --- 00:19:49.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.677 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # return 0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # 
echo initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:19:49.678 11:03:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=74296 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 74296 00:19:49.678 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74296 ']' 00:19:49.679 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.679 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.938 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.938 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.938 11:03:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:49.938 [2024-12-05 11:03:16.890698] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
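The trace above is nvmfappstart's launch-and-wait idiom: nvmf_tgt is started in the background inside the nvmf_ns_spdk namespace, its pid (74296 here) is captured, and waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers (the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100). A condensed sketch of that idiom, with the binary path and flags copied from the log; the polling loop is simplified relative to the real waitforlisten in autotest_common.sh, which also enforces the retry cap:

  # Launch the target in the test namespace and capture its pid.
  ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  # Poll the RPC socket until the app responds (or has died).
  while kill -0 "$nvmfpid" 2> /dev/null; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done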
00:19:49.938 [2024-12-05 11:03:16.890777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.938 [2024-12-05 11:03:17.044973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.196 [2024-12-05 11:03:17.100558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.196 [2024-12-05 11:03:17.100605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.196 [2024-12-05 11:03:17.100616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.196 [2024-12-05 11:03:17.100626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.196 [2024-12-05 11:03:17.100633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.196 [2024-12-05 11:03:17.101546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.196 [2024-12-05 11:03:17.101675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.196 [2024-12-05 11:03:17.101776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.196 [2024-12-05 11:03:17.101781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.196 [2024-12-05 11:03:17.143788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:50.761 11:03:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:19:51.327 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:19:51.327 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:19:51.327 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:19:51.327 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:51.893 [2024-12-05 11:03:18.948076] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:51.893 11:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:52.152 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:19:52.152 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:52.411 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:19:52.411 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:19:52.671 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:52.930 [2024-12-05 11:03:19.851811] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:52.930 11:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:19:52.930 11:03:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:19:52.930 11:03:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:19:52.930 11:03:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:19:52.930 11:03:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:19:54.309 Initializing NVMe Controllers
00:19:54.309 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:19:54.309 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:19:54.309 Initialization complete. Launching workers.
00:19:54.309 ========================================================
00:19:54.309 Latency(us)
00:19:54.309 Device Information : IOPS MiB/s Average min max
00:19:54.309 PCIE (0000:00:10.0) NSID 1 from core 0: 19232.00 75.12 1664.21 470.90 7144.80
00:19:54.309 ========================================================
00:19:54.309 Total : 19232.00 75.12 1664.21 470.90 7144.80
00:19:54.309
00:19:54.309 11:03:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:55.735 Initializing NVMe Controllers
00:19:55.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:55.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:55.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:55.735 Initialization complete. Launching workers.
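The q=1 run launched above measures the unloaded round trip over the veth pair: with a single command in flight, mean latency and IOPS are reciprocal, so the table that follows can be sanity-checked directly. A quick check of that identity against the NSID 1 row (values copied from the table below; the small residual is rounding and per-IO accounting):

  # At queue depth 1, average latency (us) ~= 1e6 / IOPS.
  awk 'BEGIN { printf "NSID 1: %.1f us vs reported 209.60 us\n", 1e6 / 4765.69 }'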
00:19:55.735 ========================================================
00:19:55.735 Latency(us)
00:19:55.735 Device Information : IOPS MiB/s Average min max
00:19:55.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4765.69 18.62 209.60 76.70 4264.53
00:19:55.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.55 0.49 8092.56 7008.80 12036.67
00:19:55.735 ========================================================
00:19:55.735 Total : 4890.23 19.10 410.37 76.70 12036.67
00:19:55.735
00:19:55.735 11:03:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:57.109 Initializing NVMe Controllers
00:19:57.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:57.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:57.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:57.109 Initialization complete. Launching workers.
00:19:57.109 ========================================================
00:19:57.109 Latency(us)
00:19:57.109 Device Information : IOPS MiB/s Average min max
00:19:57.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11060.12 43.20 2895.48 422.41 8639.20
00:19:57.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3840.31 15.00 8369.20 5619.89 16851.63
00:19:57.109 ========================================================
00:19:57.109 Total : 14900.43 58.20 4306.23 422.41 16851.63
00:19:57.109
00:19:57.109 11:03:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
00:19:57.109 11:03:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:19:59.650 Initializing NVMe Controllers
00:19:59.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:59.650 Controller IO queue size 128, less than required.
00:19:59.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:59.650 Controller IO queue size 128, less than required.
00:19:59.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:59.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:59.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:59.650 Initialization complete. Launching workers.
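The two "Controller IO queue size 128, less than required" notices above come from requesting -q 128 against qpairs whose IO queue size is also 128; an NVMe submission queue of size N can hold at most N-1 outstanding commands (one slot must stay free), so a 128-deep workload cannot be fully posted and the excess waits in the driver's software queue. That reading is an inference from NVMe queue semantics, not something the log itself states. Throughput is still self-consistent; checking the Total row of the 256 KiB table that follows:

  # 256 KiB IOs: MiB/s = IOPS * 262144 / 1048576. Total row values from the table below.
  awk 'BEGIN { printf "%.3f MiB/s (reported 709.35)\n", 2837.38 * 262144 / 1048576 }'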
00:19:59.650 ========================================================
00:19:59.650 Latency(us)
00:19:59.650 Device Information : IOPS MiB/s Average min max
00:19:59.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2135.78 533.95 60743.58 32506.92 93033.73
00:19:59.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 701.60 175.40 188344.65 45948.52 287268.79
00:19:59.650 ========================================================
00:19:59.650 Total : 2837.38 709.35 92295.52 32506.92 287268.79
00:19:59.650
00:19:59.650 11:03:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:19:59.650 Initializing NVMe Controllers
00:19:59.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:59.650 Controller IO queue size 128, less than required.
00:19:59.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:59.650 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:19:59.650 Controller IO queue size 128, less than required.
00:19:59.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:59.650 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:19:59.650 WARNING: Some requested NVMe devices were skipped
00:19:59.650 No valid NVMe controllers or AIO or URING devices found
00:19:59.650 11:03:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:02.181 Initializing NVMe Controllers
00:20:02.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:02.181 Controller IO queue size 128, less than required.
00:20:02.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:02.181 Controller IO queue size 128, less than required.
00:20:02.181 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:02.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:02.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:02.181 Initialization complete. Launching workers.
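This final run adds --transport-stat, so perf dumps per-qpair TCP transport counters at shutdown (shown below): polls is how many times the transport poll function ran, idle_polls counts iterations that found no work, and sock_completions / nvme_completions / submitted_requests / queued_requests track socket events, completed commands, and requests submitted or deferred. Those glosses follow the usual meaning of the counters in SPDK's TCP transport; treat them as a reading aid rather than a specification. One derived figure worth pulling out is the idle-poll ratio:

  # NSID 1 qpair, counters from the statistics below:
  # roughly 55% of poll iterations found nothing to do at this load.
  awk 'BEGIN { printf "idle ratio: %.2f\n", 5986 / 10832 }'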
00:20:02.182
00:20:02.182 ====================
00:20:02.182 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:02.182 TCP transport:
00:20:02.182 polls: 10832
00:20:02.182 idle_polls: 5986
00:20:02.182 sock_completions: 4846
00:20:02.182 nvme_completions: 7849
00:20:02.182 submitted_requests: 11768
00:20:02.182 queued_requests: 1
00:20:02.182
00:20:02.182 ====================
00:20:02.182 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:02.182 TCP transport:
00:20:02.182 polls: 10967
00:20:02.182 idle_polls: 5553
00:20:02.182 sock_completions: 5414
00:20:02.182 nvme_completions: 8239
00:20:02.182 submitted_requests: 12330
00:20:02.182 queued_requests: 1
00:20:02.182 ========================================================
00:20:02.182 Latency(us)
00:20:02.182 Device Information : IOPS MiB/s Average min max
00:20:02.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1961.96 490.49 65984.95 33912.94 99487.43
00:20:02.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2059.45 514.86 62529.89 30537.63 113482.61
00:20:02.182 ========================================================
00:20:02.182 Total : 4021.41 1005.35 64215.54 30537.63 113482.61
00:20:02.182
00:20:02.182 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20}
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:20:02.514 rmmod nvme_tcp
00:20:02.514 rmmod nvme_fabrics
00:20:02.514 rmmod nvme_keyring
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 74296 ']'
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 74296
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74296 ']'
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74296
00:20:02.514 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74296
00:20:02.771 killing process with pid 74296
00:20:02.771 11:03:29
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74296' 00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74296 00:20:02.771 11:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74296 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:03.708 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:20:03.967 ************************************ 00:20:03.967 END TEST nvmf_perf 00:20:03.967 ************************************ 00:20:03.967 00:20:03.967 real 0m14.978s 00:20:03.967 user 0m52.841s 00:20:03.967 sys 0m4.710s 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.967 ************************************ 00:20:03.967 START TEST nvmf_fio_host 00:20:03.967 ************************************ 00:20:03.967 11:03:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:03.967 * Looking for test storage... 
00:20:03.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.967 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:03.967 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:20:03.967 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:04.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.228 --rc genhtml_branch_coverage=1 00:20:04.228 --rc genhtml_function_coverage=1 00:20:04.228 --rc genhtml_legend=1 00:20:04.228 --rc geninfo_all_blocks=1 00:20:04.228 --rc geninfo_unexecuted_blocks=1 00:20:04.228 00:20:04.228 ' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.228 --rc genhtml_branch_coverage=1 00:20:04.228 --rc genhtml_function_coverage=1 00:20:04.228 --rc genhtml_legend=1 00:20:04.228 --rc geninfo_all_blocks=1 00:20:04.228 --rc geninfo_unexecuted_blocks=1 00:20:04.228 00:20:04.228 ' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.228 --rc genhtml_branch_coverage=1 00:20:04.228 --rc genhtml_function_coverage=1 00:20:04.228 --rc genhtml_legend=1 00:20:04.228 --rc geninfo_all_blocks=1 00:20:04.228 --rc geninfo_unexecuted_blocks=1 00:20:04.228 00:20:04.228 ' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.228 --rc genhtml_branch_coverage=1 00:20:04.228 --rc genhtml_function_coverage=1 00:20:04.228 --rc genhtml_legend=1 00:20:04.228 --rc geninfo_all_blocks=1 00:20:04.228 --rc geninfo_unexecuted_blocks=1 00:20:04.228 00:20:04.228 ' 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.228 11:03:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:04.228 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:04.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.229 11:03:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@223 -- # create_target_ns 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set 
nvmf_br up' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:04.229 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:04.230 11:03:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target0 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:04.230 10.0.0.1 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:20:04.230 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:04.490 10.0.0.2 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 
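val_to_ip above turns the integer pool value 167772161 into 10.0.0.1 before printf emits the dotted quad. The octet extraction is collapsed in the xtrace, so the shifts below are an assumption, chosen to be consistent with the printf arguments the trace shows:

    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val         & 255 ))
    }

    val_to_ip 167772161   # 10.0.0.1 (0x0A000001), matching the trace
    val_to_ip 167772162   # 10.0.0.2, the target side of the same pair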
00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:04.490 11:03:31 
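The ipts wrapper expands exactly as common.sh@547 shows: it appends a comment tagging every rule the suite installs, which is what lets teardown strip them wholesale later via iptables-save | grep -v SPDK_NVMF | iptables-restore. Reconstructed from that expansion:

    ipts() {
        # Tag each rule so cleanup can remove all suite rules with one grep -v
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT   # as in the trace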
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:04.490 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target1 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772163 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 
-- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:04.491 10.0.0.3 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772164 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:04.491 10.0.0.4 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:04.491 11:03:31 
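set_ip does two things per device: assign the /24 and mirror the address into the interface's ifalias sysfs node, which later lookups read back instead of parsing ip addr output. A sketch of the behaviour visible above (the real helper resolves the namespace through a nameref to NVMF_TARGET_NS_CMD; here it is taken as a plain argument):

    set_ip() {
        local dev=$1 ip=$2 ns=$3
        ${ns:+ip netns exec "$ns"} ip addr add "$ip/24" dev "$dev"
        # Stash the address where get_ip_address can cat it back later
        echo "$ip" | ${ns:+ip netns exec "$ns"} tee "/sys/class/net/$dev/ifalias"
    }

    set_ip initiator1 10.0.0.3                 # default namespace
    set_ip target1    10.0.0.4 nvmf_ns_spdk    # inside the target namespace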
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:04.491 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:04.750 
11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.750 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:04.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:04.751 00:20:04.751 --- 10.0.0.1 ping statistics --- 00:20:04.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.751 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:04.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:04.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:04.751 00:20:04.751 --- 10.0.0.2 ping statistics --- 00:20:04.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.751 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:04.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:04.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:20:04.751 00:20:04.751 --- 10.0.0.3 ping statistics --- 00:20:04.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.751 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:04.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:04.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:20:04.751 00:20:04.751 --- 10.0.0.4 ping statistics --- 00:20:04.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.751 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # return 0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:04.751 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:04.752 11:03:31 
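ping_ips walks both pairs and proves the bridge forwards in both directions: each initiator address is pinged from inside nvmf_ns_spdk, and each target address from the default namespace. A minimal equivalent of ping_ip as traced (again taking the netns name directly rather than a command-array nameref):

    ping_ip() {
        local ip=$1 ns=$2 count=1
        ${ns:+ip netns exec "$ns"} ping -c "$count" "$ip"
    }

    ping_ip 10.0.0.1 nvmf_ns_spdk   # initiator0 reached from the target namespace
    ping_ip 10.0.0.2                # target0 reached from the default namespace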
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host 
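nvmf_legacy_env maps the dev_map world back onto the variable names older test scripts expect (NVMF_FIRST_INITIATOR_IP, NVMF_FIRST_TARGET_IP, and so on), and every value comes from the ifalias stash rather than live address parsing. A sketch of the get_ip_address path traced above:

    get_ip_address() {
        local dev=$1 ns=$2 ip
        ip=$(${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias")
        [[ -n $ip ]] && echo "$ip"
    }

    NVMF_FIRST_INITIATOR_IP=$(get_ip_address initiator0)            # 10.0.0.1
    NVMF_FIRST_TARGET_IP=$(get_ip_address target0 nvmf_ns_spdk)     # 10.0.0.2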
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:04.752 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74762 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74762 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 74762 ']' 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
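fio.sh@23-28 launches nvmf_tgt inside the target namespace, records its pid (74762 here), and blocks until the RPC socket answers; the rpc.py calls that follow in the trace then provision the whole target. Stripped of the suite's wrappers, the sequence is roughly as below (paths shortened to the repo root; the polling loop is a simplified stand-in for waitforlisten, which also checks pid liveness and caps retries):

    ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # /var/tmp/spdk.sock is a unix socket, so rpc.py works from the default namespace
    until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420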
00:20:05.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.011 11:03:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.011 [2024-12-05 11:03:32.009745] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:20:05.011 [2024-12-05 11:03:32.009964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.011 [2024-12-05 11:03:32.163617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.271 [2024-12-05 11:03:32.212407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.271 [2024-12-05 11:03:32.212631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.271 [2024-12-05 11:03:32.212760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.271 [2024-12-05 11:03:32.212770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.271 [2024-12-05 11:03:32.212777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.271 [2024-12-05 11:03:32.213718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.271 [2024-12-05 11:03:32.213799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.271 [2024-12-05 11:03:32.214829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.271 [2024-12-05 11:03:32.214833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.271 [2024-12-05 11:03:32.256896] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.838 11:03:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.838 11:03:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:20:05.838 11:03:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:06.097 [2024-12-05 11:03:33.055408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.097 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:06.097 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.097 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.097 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:06.356 Malloc1 00:20:06.356 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:06.686 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:06.686 11:03:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.949 [2024-12-05 11:03:34.011773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.949 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:07.209 11:03:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:07.469 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:07.469 fio-3.35 00:20:07.469 Starting 1 thread 00:20:10.013 00:20:10.013 test: (groupid=0, jobs=1): err= 0: pid=74840: Thu Dec 5 11:03:36 2024 00:20:10.013 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec) 00:20:10.013 slat (nsec): min=1550, max=372238, avg=1713.33, stdev=3161.52 00:20:10.013 clat (usec): min=2929, max=9741, avg=5759.35, stdev=418.66 00:20:10.013 lat (usec): min=2976, max=9743, avg=5761.07, stdev=418.68 00:20:10.013 clat percentiles (usec): 00:20:10.013 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5473], 00:20:10.013 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5800], 00:20:10.013 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6194], 95.00th=[ 6390], 00:20:10.013 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 8717], 99.95th=[ 9372], 00:20:10.013 | 99.99th=[ 9765] 00:20:10.013 bw ( KiB/s): min=45708, max=47160, per=99.92%, avg=46547.00, stdev=627.71, samples=4 00:20:10.013 iops : min=11427, max=11790, avg=11636.75, stdev=156.93, samples=4 00:20:10.013 write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(90.6MiB/2005msec); 0 zone resets 00:20:10.013 slat (nsec): min=1593, max=266055, avg=1768.39, stdev=1967.16 00:20:10.013 clat (usec): min=2779, max=9420, avg=5228.16, stdev=373.53 00:20:10.013 lat (usec): min=2795, max=9422, avg=5229.93, stdev=373.65 00:20:10.013 clat percentiles (usec): 00:20:10.013 | 1.00th=[ 4359], 5.00th=[ 4752], 10.00th=[ 4817], 20.00th=[ 4948], 00:20:10.013 | 30.00th=[ 5080], 40.00th=[ 5145], 50.00th=[ 5211], 60.00th=[ 5276], 00:20:10.013 | 70.00th=[ 5407], 80.00th=[ 5473], 90.00th=[ 5604], 95.00th=[ 5735], 00:20:10.013 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7898], 99.95th=[ 8717], 00:20:10.013 | 99.99th=[ 9110] 00:20:10.013 bw ( KiB/s): min=45928, max=46592, per=99.93%, avg=46228.75, stdev=280.15, samples=4 00:20:10.013 iops : min=11482, max=11648, avg=11557.00, stdev=70.13, samples=4 00:20:10.013 lat (msec) : 4=0.45%, 10=99.55% 00:20:10.013 cpu : usr=70.56%, sys=23.60%, ctx=5, majf=0, minf=7 00:20:10.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:10.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:10.013 issued rwts: total=23351,23187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:10.013 00:20:10.013 Run status group 0 (all jobs): 00:20:10.013 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:20:10.013 WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=90.6MiB (95.0MB), run=2005-2005msec 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:10.013 11:03:36 
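The fio invocation above is the heart of this test: with LD_PRELOAD pointing at build/fio/spdk_nvme, fio's ioengine=spdk resolves to the SPDK plugin, and the --filename string carries the NVMe-oF connection triple instead of a block device path. Reduced to its essentials (repo-relative paths assumed in place of the /home/vagrant paths in the trace):

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The quoting matters: the whole trtype/traddr/trsvcid string must reach the plugin as a single filename argument, which is why the suite passes it as one quoted word.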
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:10.013 11:03:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:10.013 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:10.013 fio-3.35 00:20:10.013 Starting 1 thread 00:20:12.545 00:20:12.545 test: (groupid=0, jobs=1): err= 0: pid=74885: Thu Dec 5 11:03:39 2024 00:20:12.545 read: IOPS=9875, BW=154MiB/s (162MB/s)(310MiB/2007msec) 00:20:12.545 slat (usec): min=2, max=104, avg= 2.78, stdev= 1.51 00:20:12.545 clat (usec): min=1670, max=18535, avg=7389.00, stdev=2242.63 00:20:12.545 lat (usec): min=1673, max=18537, avg=7391.78, stdev=2242.72 00:20:12.545 clat percentiles (usec): 00:20:12.545 | 1.00th=[ 3163], 5.00th=[ 3851], 10.00th=[ 4424], 20.00th=[ 5407], 00:20:12.545 | 30.00th=[ 6128], 40.00th=[ 6718], 50.00th=[ 7308], 60.00th=[ 7898], 00:20:12.545 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[11207], 00:20:12.545 | 99.00th=[12911], 99.50th=[13829], 99.90th=[16319], 99.95th=[17695], 00:20:12.545 | 99.99th=[18220] 00:20:12.545 bw ( KiB/s): min=66912, max=94112, per=49.69%, avg=78520.00, stdev=12990.09, samples=4 00:20:12.545 iops : min= 4182, max= 5882, avg=4907.50, stdev=811.88, samples=4 
00:20:12.545 write: IOPS=5663, BW=88.5MiB/s (92.8MB/s)(160MiB/1812msec); 0 zone resets 00:20:12.545 slat (usec): min=28, max=430, avg=30.57, stdev= 8.58 00:20:12.545 clat (usec): min=5104, max=23566, avg=9976.02, stdev=2342.24 00:20:12.545 lat (usec): min=5136, max=23595, avg=10006.59, stdev=2343.75 00:20:12.545 clat percentiles (usec): 00:20:12.545 | 1.00th=[ 5997], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 7963], 00:20:12.545 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:20:12.545 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13042], 95.00th=[14353], 00:20:12.545 | 99.00th=[16712], 99.50th=[17433], 99.90th=[22676], 99.95th=[23200], 00:20:12.545 | 99.99th=[23462] 00:20:12.545 bw ( KiB/s): min=69280, max=98304, per=90.11%, avg=81664.00, stdev=13922.49, samples=4 00:20:12.545 iops : min= 4330, max= 6144, avg=5104.00, stdev=870.16, samples=4 00:20:12.545 lat (msec) : 2=0.03%, 4=3.97%, 10=72.67%, 20=23.26%, 50=0.07% 00:20:12.545 cpu : usr=79.76%, sys=16.60%, ctx=2, majf=0, minf=3 00:20:12.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:20:12.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.545 issued rwts: total=19821,10263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.545 00:20:12.545 Run status group 0 (all jobs): 00:20:12.545 READ: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=310MiB (325MB), run=2007-2007msec 00:20:12.545 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=160MiB (168MB), run=1812-1812msec 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:12.545 rmmod nvme_tcp 00:20:12.545 rmmod nvme_fabrics 00:20:12.545 rmmod nvme_keyring 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 74762 ']' 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 74762 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 
74762 ']' 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74762 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.545 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74762 00:20:12.803 killing process with pid 74762 00:20:12.803 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.803 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.803 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74762' 00:20:12.803 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74762 00:20:12.803 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74762 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:13.063 11:03:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:13.063 11:03:40 
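nvmf_fini, traced here and just below, is the mirror image of setup: remove the target namespace, drop the bridge, delete the initiator-side veth ends (their _br peers vanish with them, since veth pairs are deleted together), skip target0/target1 (they lived in nvmf_ns_spdk), then restore iptables minus every SPDK_NVMF-tagged rule. _remove_target_ns runs with xtrace silenced, so the namespace deletion itself is not visible; ip netns delete is the assumed equivalent. Condensed:

    ip netns delete nvmf_ns_spdk          # takes target0/target1 with it (assumed step)
    ip link delete nvmf_br
    ip link delete initiator0
    ip link delete initiator1
    iptables-save | grep -v SPDK_NVMF | iptables-restore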
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:20:13.063 00:20:13.063 real 0m9.221s 00:20:13.063 user 0m35.030s 00:20:13.063 sys 0m3.024s 00:20:13.063 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.064 ************************************ 00:20:13.064 END TEST nvmf_fio_host 00:20:13.064 ************************************ 00:20:13.064 11:03:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.323 ************************************ 00:20:13.323 START TEST nvmf_failover 00:20:13.323 ************************************ 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:13.323 * Looking for test storage... 
00:20:13.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.323 --rc genhtml_branch_coverage=1 00:20:13.323 --rc genhtml_function_coverage=1 00:20:13.323 --rc genhtml_legend=1 00:20:13.323 --rc geninfo_all_blocks=1 00:20:13.323 --rc geninfo_unexecuted_blocks=1 00:20:13.323 00:20:13.323 ' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.323 --rc genhtml_branch_coverage=1 00:20:13.323 --rc genhtml_function_coverage=1 00:20:13.323 --rc genhtml_legend=1 00:20:13.323 --rc geninfo_all_blocks=1 00:20:13.323 --rc geninfo_unexecuted_blocks=1 00:20:13.323 00:20:13.323 ' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.323 --rc genhtml_branch_coverage=1 00:20:13.323 --rc genhtml_function_coverage=1 00:20:13.323 --rc genhtml_legend=1 00:20:13.323 --rc geninfo_all_blocks=1 00:20:13.323 --rc geninfo_unexecuted_blocks=1 00:20:13.323 00:20:13.323 ' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.323 --rc genhtml_branch_coverage=1 00:20:13.323 --rc genhtml_function_coverage=1 00:20:13.323 --rc genhtml_legend=1 00:20:13.323 --rc geninfo_all_blocks=1 00:20:13.323 --rc geninfo_unexecuted_blocks=1 00:20:13.323 00:20:13.323 ' 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.323 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:13.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:13.584 11:03:40 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@223 -- # create_target_ns 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:13.584 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # 
local dev=nvmf_br in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:13.585 11:03:40 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 
00:20:13.585 10.0.0.1 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:13.585 10.0.0.2 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:13.585 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
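[annotation] The interface-pair setup being traced here reduces to a handful of ip(8) commands per pair. In the sketch below, the byte arithmetic inside val_to_ip is an assumption — the trace only shows the final printf with the four octets already split out — while every other command appears verbatim in the trace (bring-up steps omitted for brevity):

    val_to_ip() {                        # 167772161 == 0x0A000001 -> 10.0.0.1
        local val=$1
        printf '%u.%u.%u.%u\n' $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
                               $(( (val >> 8) & 255 ))  $((  val        & 255 ))
    }

    ip netns add nvmf_ns_spdk                              # target side gets its own netns
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                 # only the target end is moved
    ip addr add "$(val_to_ip 167772161)/24" dev initiator0                    # 10.0.0.1
    ip netns exec nvmf_ns_spdk ip addr add "$(val_to_ip 167772162)/24" dev target0  # 10.0.0.2
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias  # helpers read IPs back from ifalias
    ip link set initiator0_br master nvmf_br               # host-side peers join the bridge
    ip link set target0_br master nvmf_br
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF: comment tag on each rule is what lets the teardown's "iptables-save | grep -v SPDK_NVMF | iptables-restore" strip exactly the rules this setup added. The same sequence repeats below for the initiator1/target1 pair with 10.0.0.3 and 10.0.0.4, after which ping_ips verifies each address end to end.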
00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:13.586 11:03:40 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:13.586 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target1 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772163 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:13.845 10.0.0.3 00:20:13.845 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772164 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:13.846 10.0.0.4 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:13.846 11:03:40 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:13.846 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:13.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:13.847 00:20:13.847 --- 10.0.0.1 ping statistics --- 00:20:13.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.847 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:13.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:13.847 00:20:13.847 --- 10.0.0.2 ping statistics --- 00:20:13.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.847 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:13.847 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:13.847 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:20:13.847 00:20:13.847 --- 10.0.0.3 ping statistics --- 00:20:13.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.847 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:13.847 11:03:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:20:13.847 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:20:13.847 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:13.847 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:14.108 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:14.108 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:20:14.108 00:20:14.108 --- 10.0.0.4 ping statistics --- 00:20:14.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.108 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # return 0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:14.108 11:03:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.108 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=75154 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 75154 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75154 ']' 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
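The xtrace above is test/nvmf/setup.sh resolving each logical device (initiator0/1, target0/1) to an address: it reads /sys/class/net/<dev>/ifalias, running the read inside the nvmf_ns_spdk network namespace for target devices, then pings each resolved address once. A minimal sketch of that lookup, assuming the namespace and ifalias entries already exist; the nameref indirection and error handling of the real script are trimmed, so the function bodies here are illustrative rather than the script's exact code:

  # Sketch of the ifalias-based address lookup traced above (simplified).
  get_ip_address() {
    local dev=$1 in_ns=$2 ip
    if [[ -n $in_ns ]]; then
      # target0/target1 live inside the nvmf_ns_spdk network namespace
      ip=$(ip netns exec nvmf_ns_spdk cat "/sys/class/net/$dev/ifalias")
    else
      ip=$(cat "/sys/class/net/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"
  }

  ping_ip() {
    ping -c 1 "$1"   # one probe per initiator/target pair
  }

  ping_ip "$(get_ip_address target1 ns)"   # resolves to 10.0.0.4 in this run

The addresses recovered this way are what nvmf_legacy_env exports above: NVMF_FIRST_INITIATOR_IP=10.0.0.1, NVMF_SECOND_INITIATOR_IP=10.0.0.3, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_SECOND_TARGET_IP=10.0.0.4.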
00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:14.109 11:03:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:14.109 [2024-12-05 11:03:41.216333] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:20:14.109 [2024-12-05 11:03:41.216402] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:14.367 [2024-12-05 11:03:41.368812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:14.367 [2024-12-05 11:03:41.419496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:14.367 [2024-12-05 11:03:41.419538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:14.367 [2024-12-05 11:03:41.419548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:14.367 [2024-12-05 11:03:41.419557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:14.367 [2024-12-05 11:03:41.419563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:14.367 [2024-12-05 11:03:41.420858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:14.367 [2024-12-05 11:03:41.421217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:14.367 [2024-12-05 11:03:41.421218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:14.367 [2024-12-05 11:03:41.463256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:20:15.300 [2024-12-05 11:03:42.355461] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:15.300 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:20:15.558 Malloc0
00:20:15.558 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:15.815 11:03:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:16.072 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:16.330 [2024-12-05 11:03:43.272400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:16.330 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:16.587 [2024-12-05 11:03:43.492146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:16.587 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:16.587 [2024-12-05 11:03:43.719970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:20:16.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75206
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75206 /var/tmp/bdevperf.sock
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75206 ']'
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
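Condensed, the provisioning that failover.sh@22-28 just performed is a short rpc.py sequence against the target's default RPC socket: create the TCP transport, back subsystem cnode1 with a 64 MB malloc bdev (512-byte blocks), and expose three listeners on the same address so the host has alternate ports to fail over between. Every command below is taken from the trace; only the loop over ports is a condensation:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options per nvmf/common.sh
  $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                 # three paths on 10.0.0.2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

bdevperf is then started with -z, so it idles until driven over its own RPC socket (/var/tmp/bdevperf.sock), which is what the attach and perform_tests calls that follow do.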
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:16.588 11:03:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:20:17.151 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:17.152 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:20:17.152 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:17.409 NVMe0n1
00:20:17.409 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:17.739
00:20:17.739 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75222
00:20:17.739 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:17.739 11:03:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:20:18.683 11:03:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:18.940 11:03:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:20:22.225 11:03:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:20:22.225
00:20:22.225 11:03:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:22.484 [2024-12-05 11:03:49.482284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c18970 is same with the state(6) to be set
00:20:22.484 11:03:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:20:25.769 11:03:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:25.769 [2024-12-05 11:03:52.711128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:25.769 11:03:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:20:26.706 11:03:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:20:26.965 11:03:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75222
00:20:33.558 {
00:20:33.558   "results": [
00:20:33.558     {
00:20:33.558       "job": "NVMe0n1",
00:20:33.558       "core_mask": "0x1",
00:20:33.558       "workload": "verify",
00:20:33.558       "status": "finished",
00:20:33.558       "verify_range": {
00:20:33.558         "start": 0,
00:20:33.558         "length": 16384
00:20:33.558       },
00:20:33.558       "queue_depth": 128,
00:20:33.558       "io_size": 4096,
00:20:33.558       "runtime": 15.007869,
00:20:33.558       "iops": 10693.456879187845,
00:20:33.558       "mibps": 41.77131593432752,
00:20:33.558       "io_failed": 4037,
00:20:33.558       "io_timeout": 0,
00:20:33.558       "avg_latency_us": 11652.737456656676,
00:20:33.558       "min_latency_us": 467.174297188755,
00:20:33.558       "max_latency_us": 26003.842570281126
00:20:33.559     }
00:20:33.559   ],
00:20:33.559   "core_count": 1
00:20:33.559 }
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75206
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75206 ']'
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75206
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75206
00:20:33.559 killing process with pid 75206
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75206'
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75206
00:20:33.559 11:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75206
00:20:33.559 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:20:33.559 [2024-12-05 11:03:43.788442] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:20:33.559 [2024-12-05 11:03:43.788537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75206 ]
00:20:33.559 [2024-12-05 11:03:43.926625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:33.559 [2024-12-05 11:03:43.983431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:33.559 [2024-12-05 11:03:44.025797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:33.559 Running I/O for 15 seconds...
00:20:33.559 11328.00 IOPS, 44.25 MiB/s [2024-12-05T11:04:00.718Z] [2024-12-05 11:03:45.954353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.559 [2024-12-05 11:03:45.954641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.954970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.954996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:101456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.955025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 
[2024-12-05 11:03:45.955053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.955084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.955113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.955152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.559 [2024-12-05 11:03:45.955165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.559 [2024-12-05 11:03:45.955180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955361] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.560 [2024-12-05 11:03:45.955403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:101568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:101576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:101600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:101648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.955983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.955996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.956011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.956024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.956040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.956053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.956068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.956081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.956096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.560 [2024-12-05 11:03:45.956109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.560 [2024-12-05 11:03:45.956125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956239] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.956584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:101704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:101744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:101752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.956977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.956991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.957012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.957025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.957040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.561 [2024-12-05 11:03:45.957054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.561 [2024-12-05 11:03:45.957069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.561 [2024-12-05 11:03:45.957083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.562 [2024-12-05 11:03:45.957115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.562 [2024-12-05 11:03:45.957129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.562 [2024-12-05 11:03:45.957144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.562 
[2024-12-05 11:03:45.957158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.562 [... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for in-flight WRITE sqid:1 lba:102168-102200 and READ sqid:1 lba:101824-101872, every completion ABORTED - SQ DELETION (00/08) ...]
00:20:33.562 [2024-12-05 11:03:45.957542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcf10 is same with the state(6) to be set
00:20:33.562 [... repeated nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request records for queued READ lba:101880 and WRITE lba:102208-102392, each completed manually as ABORTED - SQ DELETION (00/08) ...]
00:20:33.563 [2024-12-05 11:03:45.958858] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:20:33.563 [... four ASYNC EVENT REQUEST admin commands (qid:0 cid:0-3) aborted as ABORTED - SQ DELETION (00/08) ...]
00:20:33.563 [2024-12-05 11:03:45.959040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
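The "(00/08)" pair stamped on every completion above is the NVMe Status Code Type / Status Code: SCT 0x0 (generic command status) with SC 0x08, "Aborted - SQ Deletion", the status returned for commands still in flight when their submission queue is deleted, as happens here while the TCP path is torn down for failover. Below is a minimal decoding sketch of that pair; the struct and helper are hypothetical stand-ins, not SPDK's spdk_nvme_cpl or its print helpers.

    /* status_decode.c - sketch only; not SPDK's implementation. */
    #include <stdio.h>
    #include <stdint.h>

    struct cpl_status {
        uint8_t sct;  /* Status Code Type (0x0 = generic command status) */
        uint8_t sc;   /* Status Code (0x08 under SCT 0 = Aborted - SQ Deletion) */
    };

    static const char *status_string(struct cpl_status st)
    {
        if (st.sct == 0x0 && st.sc == 0x00) return "SUCCESS";
        if (st.sct == 0x0 && st.sc == 0x08) return "ABORTED - SQ DELETION";
        return "OTHER";
    }

    int main(void)
    {
        struct cpl_status st = { .sct = 0x0, .sc = 0x08 };
        /* Prints in the same "NAME (SCT/SC)" shape as the log records. */
        printf("%s (%02x/%02x)\n", status_string(st), st.sct, st.sc);
        return 0;
    }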
00:20:33.563 [2024-12-05 11:03:45.962086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:33.563 [2024-12-05 11:03:45.962131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dc60 (9): Bad file descriptor 00:20:33.563 [2024-12-05 11:03:45.989027] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:20:33.563 11127.50 IOPS, 43.47 MiB/s [2024-12-05T11:04:00.722Z] 11320.33 IOPS, 44.22 MiB/s [2024-12-05T11:04:00.722Z] 11447.25 IOPS, 44.72 MiB/s [2024-12-05T11:04:00.722Z] [2024-12-05 11:03:49.482725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.563 [2024-12-05 11:03:49.482781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.563 [2024-12-05 11:03:49.482803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.482817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:31544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:31552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:31568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:31584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.482981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.482995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:31592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.483007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:31600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.483034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.564 [2024-12-05 11:03:49.483516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:31608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.483543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:31616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 [2024-12-05 11:03:49.483570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.564 [2024-12-05 11:03:49.483584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:31624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.564 
[2024-12-05 11:03:49.483597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:31632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:31712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:31728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.483952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.483980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.483994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.565 [2024-12-05 11:03:49.484176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:31744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:31760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:31768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:31800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:31808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.565 [2024-12-05 11:03:49.484486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.565 [2024-12-05 11:03:49.484506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:31824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:31848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 
11:03:49.484723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.484736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.484984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.484998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.566 [2024-12-05 11:03:49.485174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:31912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:31920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:31952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.566 [2024-12-05 11:03:49.485401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.566 [2024-12-05 11:03:49.485415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.567 [2024-12-05 11:03:49.485428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.567 [2024-12-05 11:03:49.485462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.567 [2024-12-05 11:03:49.485492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.567 [2024-12-05 11:03:49.485520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31488 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.567 [2024-12-05 11:03:49.485714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b01370 is same with the state(6) to be set 00:20:33.567 [2024-12-05 11:03:49.485744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.485763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31536 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.485776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.485810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:31992 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.485822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:20:33.567 [2024-12-05 11:03:49.485854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32000 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.485869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.485902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32008 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.485919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.485960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32016 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.485973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.485986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.485995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.486005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32024 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.486017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.486030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.486039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.486049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32032 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.486061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.486074] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.486083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.486093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32040 len:8 PRP1 0x0 PRP2 0x0 00:20:33.567 [2024-12-05 11:03:49.486105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.567 [2024-12-05 11:03:49.486118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.567 [2024-12-05 11:03:49.486127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.567 [2024-12-05 11:03:49.486137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32048 len:8 PRP1 0x0 PRP2 0x0
00:20:33.567 [2024-12-05 11:03:49.486150 - 11:03:49.486796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: the same three-message sequence repeats for each queued WRITE (sqid:1 cid:0 nsid:1 lba:32048-32160 len:8 PRP1 0x0 PRP2 0x0): aborting queued i/o, Command completed manually, ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.568 [2024-12-05 11:03:49.486848] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:20:33.568 [2024-12-05 11:03:49.486898 - 11:03:49.500186] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3-0) aborted with ABORTED - SQ DELETION (00/08)
00:20:33.568 [2024-12-05 11:03:49.500214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:20:33.568 [2024-12-05 11:03:49.500254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dc60 (9): Bad file descriptor
00:20:33.568 [2024-12-05 11:03:49.503875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:20:33.568 [2024-12-05 11:03:49.532766] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
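The burst of ABORTED - SQ DELETION completions above is bdev_nvme draining queued I/O before it retries the next registered transport ID. A minimal sketch of forcing the same path switch by hand, assuming a target configured with the same subsystem and listeners as this job; the remove_listener call is illustrative and does not appear in this run:

# Sketch: drop the active listener so queued I/O is aborted and
# bdev_nvme fails over to the next trid (4421 -> 4422).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Expected on the initiator side, as in the log above:
#   bdev_nvme_failover_trid: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
#   bdev_nvme_reset_ctrlr_complete: Resetting controller successful.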
00:20:33.568 10761.80 IOPS, 42.04 MiB/s [2024-12-05T11:04:00.727Z] 10200.50 IOPS, 39.85 MiB/s [2024-12-05T11:04:00.727Z] 9819.00 IOPS, 38.36 MiB/s [2024-12-05T11:04:00.727Z] 9929.62 IOPS, 38.79 MiB/s [2024-12-05T11:04:00.727Z] 10122.33 IOPS, 39.54 MiB/s
00:20:33.568 [2024-12-05 11:03:53.933204 - 11:03:53.936501] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: repeated once per in-flight command on sqid:1 - WRITE (nsid:1 lba:106392-106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (nsid:1 lba:106008-106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completions all printed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:33.572 [2024-12-05 11:03:53.936516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afd9c0 is same with the state(6) to be set
00:20:33.572 [2024-12-05 11:03:53.936534 - 11:03:53.937646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243 / 474: repeated once per queued command - READ (lba:106320-106384) and WRITE (lba:106920-107024), each len:8 PRP1 0x0 PRP2 0x0, aborted and completed manually with ABORTED - SQ DELETION (00/08)
00:20:33.573 [2024-12-05 11:03:53.937702] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:20:33.573 [2024-12-05 11:03:53.937768 - 11:03:53.953658] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: four queued admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08)
00:20:33.573 [2024-12-05 11:03:53.953718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:20:33.573 [2024-12-05 11:03:53.953799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8dc60 (9): Bad file descriptor
00:20:33.573 [2024-12-05 11:03:53.957848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:20:33.573 [2024-12-05 11:03:53.982177] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
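That is the second failover cycle in this capture; together with the cycle logged earlier in the run, 'Resetting controller successful' has now been printed three times for cnode1, which is exactly what the host/failover.sh@65 check just below asserts. A hedged equivalent of that check, assuming the bdevperf output has been captured to a file named in $log (the variable name is illustrative):

# Sketch mirroring failover.sh@65-@67: the test fails unless exactly
# three controller resets completed successfully.
count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || exit 1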
00:20:33.573 10218.80 IOPS, 39.92 MiB/s [2024-12-05T11:04:00.732Z] 10354.00 IOPS, 40.45 MiB/s [2024-12-05T11:04:00.732Z] 10481.08 IOPS, 40.94 MiB/s [2024-12-05T11:04:00.732Z] 10590.54 IOPS, 41.37 MiB/s [2024-12-05T11:04:00.732Z] 10663.71 IOPS, 41.66 MiB/s [2024-12-05T11:04:00.732Z] 10692.67 IOPS, 41.77 MiB/s 00:20:33.573 Latency(us) 00:20:33.573 [2024-12-05T11:04:00.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.573 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:33.573 Verification LBA range: start 0x0 length 0x4000 00:20:33.573 NVMe0n1 : 15.01 10693.46 41.77 268.99 0.00 11652.74 467.17 26003.84 00:20:33.573 [2024-12-05T11:04:00.732Z] =================================================================================================================== 00:20:33.573 [2024-12-05T11:04:00.732Z] Total : 10693.46 41.77 268.99 0.00 11652.74 467.17 26003.84 00:20:33.573 Received shutdown signal, test time was about 15.000000 seconds 00:20:33.573 00:20:33.573 Latency(us) 00:20:33.573 [2024-12-05T11:04:00.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.573 [2024-12-05T11:04:00.732Z] =================================================================================================================== 00:20:33.573 [2024-12-05T11:04:00.732Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75402 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75402 /var/tmp/bdevperf.sock 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75402 ']' 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
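The trace that follows drives the second half of the test: two extra listeners are added on ports 4421 and 4422, three paths are attached to one bdev in failover mode, and the active path is detached to force a reset. A condensed sketch of that flow, using the same rpc.py calls the trace records (socket path, address, and NQN taken from the trace; not a drop-in replacement for host/failover.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose two additional target listeners for the secondary paths.
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

    # Attach all three paths to the NVMe0 bdev in failover mode.
    for port in 4420 4421 4422; do
      $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn" -x failover
    done

    # Drop the active path; bdev_nvme should reset onto the next listener.
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n "$nqn"
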
00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.573 11:04:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:34.139 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.139 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:34.139 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:34.139 [2024-12-05 11:04:01.271812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:34.139 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:34.396 [2024-12-05 11:04:01.491678] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:34.397 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:34.655 NVMe0n1 00:20:34.655 11:04:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:34.914 00:20:35.176 11:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:35.434 00:20:35.434 11:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:35.434 11:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:35.692 11:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:35.951 11:04:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:39.230 11:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:39.230 11:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:39.230 11:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.230 11:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75479 00:20:39.230 11:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75479 00:20:40.163 { 00:20:40.163 "results": [ 00:20:40.163 { 00:20:40.163 "job": "NVMe0n1", 00:20:40.163 "core_mask": "0x1", 00:20:40.163 "workload": "verify", 00:20:40.163 "status": "finished", 00:20:40.163 "verify_range": { 00:20:40.163 "start": 0, 00:20:40.163 "length": 16384 00:20:40.163 }, 00:20:40.163 "queue_depth": 128, 
00:20:40.163 "io_size": 4096, 00:20:40.163 "runtime": 1.018831, 00:20:40.163 "iops": 7131.702902640379, 00:20:40.163 "mibps": 27.85821446343898, 00:20:40.163 "io_failed": 0, 00:20:40.163 "io_timeout": 0, 00:20:40.163 "avg_latency_us": 17886.763640303023, 00:20:40.163 "min_latency_us": 1855.5373493975903, 00:20:40.163 "max_latency_us": 15581.249799196787 00:20:40.163 } 00:20:40.163 ], 00:20:40.163 "core_count": 1 00:20:40.163 } 00:20:40.421 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:40.421 [2024-12-05 11:04:00.110936] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:20:40.421 [2024-12-05 11:04:00.111647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:20:40.421 [2024-12-05 11:04:00.264225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.421 [2024-12-05 11:04:00.321227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.421 [2024-12-05 11:04:00.364875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.421 [2024-12-05 11:04:02.852843] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:20:40.421 [2024-12-05 11:04:02.853386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.422 [2024-12-05 11:04:02.853486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.422 [2024-12-05 11:04:02.853554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.422 [2024-12-05 11:04:02.853614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.422 [2024-12-05 11:04:02.853664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.422 [2024-12-05 11:04:02.853726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.422 [2024-12-05 11:04:02.853775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:40.422 [2024-12-05 11:04:02.853841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.422 [2024-12-05 11:04:02.853896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:40.422 [2024-12-05 11:04:02.854037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:40.422 [2024-12-05 11:04:02.854117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b0c60 (9): Bad file descriptor 00:20:40.422 [2024-12-05 11:04:02.861349] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:20:40.422 Running I/O for 1 seconds... 00:20:40.422 7138.00 IOPS, 27.88 MiB/s 00:20:40.422 Latency(us) 00:20:40.422 [2024-12-05T11:04:07.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.422 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:40.422 Verification LBA range: start 0x0 length 0x4000 00:20:40.422 NVMe0n1 : 1.02 7131.70 27.86 0.00 0.00 17886.76 1855.54 15581.25 00:20:40.422 [2024-12-05T11:04:07.581Z] =================================================================================================================== 00:20:40.422 [2024-12-05T11:04:07.581Z] Total : 7131.70 27.86 0.00 0.00 17886.76 1855.54 15581.25 00:20:40.422 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:40.422 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:40.679 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:40.679 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:40.679 11:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:40.937 11:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:41.196 11:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75402 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75402 ']' 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75402 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75402 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75402' 00:20:44.550 killing process with pid 75402 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75402 00:20:44.550 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75402 00:20:44.808 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:44.808 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:45.066 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:45.066 11:04:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:20:45.066 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:45.067 rmmod nvme_tcp 00:20:45.067 rmmod nvme_fabrics 00:20:45.067 rmmod nvme_keyring 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 75154 ']' 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 75154 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75154 ']' 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75154 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75154 00:20:45.067 killing process with pid 75154 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75154' 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75154 00:20:45.067 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75154 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:45.326 
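The teardown traced below undoes the veth setup: the target network namespace goes away (taking target0/target1 with it, hence the continue branches), the nvmf_br bridge and both host-side initiator devices are deleted, and the SPDK iptables rules are stripped. The same cleanup written out plainly, as a sketch of what the traced setup.sh functions do rather than the script itself:

    # Plain-shell equivalent of the nvmf_fini teardown traced below (sketch only).
    ip netns delete nvmf_ns_spdk 2>/dev/null   # also removes target0/target1
    ip link delete nvmf_br 2>/dev/null         # main test bridge
    for dev in initiator0 initiator1; do
      ip link delete "$dev" 2>/dev/null        # host-side veth endpoints
    done
    # Drop SPDK-tagged iptables rules, keeping everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
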
11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:45.326 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:20:45.585 11:04:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:20:45.585 00:20:45.585 real 0m32.262s 00:20:45.585 user 2m1.528s 00:20:45.585 sys 0m6.828s 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:45.585 ************************************ 00:20:45.585 END TEST nvmf_failover 00:20:45.585 ************************************ 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.585 ************************************ 00:20:45.585 START TEST nvmf_host_discovery 00:20:45.585 ************************************ 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:45.585 * Looking for test storage... 00:20:45.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:45.585 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:45.586 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.846 --rc genhtml_branch_coverage=1 00:20:45.846 --rc genhtml_function_coverage=1 00:20:45.846 --rc genhtml_legend=1 00:20:45.846 --rc geninfo_all_blocks=1 00:20:45.846 --rc geninfo_unexecuted_blocks=1 00:20:45.846 00:20:45.846 ' 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.846 --rc genhtml_branch_coverage=1 00:20:45.846 --rc genhtml_function_coverage=1 00:20:45.846 --rc genhtml_legend=1 00:20:45.846 --rc geninfo_all_blocks=1 00:20:45.846 --rc geninfo_unexecuted_blocks=1 00:20:45.846 00:20:45.846 ' 00:20:45.846 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:45.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.847 --rc genhtml_branch_coverage=1 00:20:45.847 --rc genhtml_function_coverage=1 00:20:45.847 --rc genhtml_legend=1 00:20:45.847 --rc geninfo_all_blocks=1 00:20:45.847 --rc geninfo_unexecuted_blocks=1 00:20:45.847 00:20:45.847 ' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:45.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.847 --rc genhtml_branch_coverage=1 00:20:45.847 --rc genhtml_function_coverage=1 00:20:45.847 --rc genhtml_legend=1 00:20:45.847 --rc geninfo_all_blocks=1 00:20:45.847 --rc geninfo_unexecuted_blocks=1 00:20:45.847 00:20:45.847 ' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:45.847 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:45.847 11:04:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:45.847 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:45.848 11:04:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:45.848 10.0.0.1 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:45.848 10.0.0.2 00:20:45.848 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 
in_ns= 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:45.849 11:04:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:45.849 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:45.849 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:45.849 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:46.109 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name 
target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:46.110 10.0.0.3 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:46.110 10.0.0.4 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 
-- # [[ veth == veth ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:46.110 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:46.111 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:46.371 11:04:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:46.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:46.371 00:20:46.371 --- 10.0.0.1 ping statistics --- 00:20:46.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.371 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:46.371 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.2 in_ns= count=1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:46.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:46.372 00:20:46.372 --- 10.0.0.2 ping statistics --- 00:20:46.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.372 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:46.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:46.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:20:46.372 00:20:46.372 --- 10.0.0.3 ping statistics --- 00:20:46.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.372 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:46.372 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:46.372 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.118 ms 00:20:46.372 00:20:46.372 --- 10.0.0.4 ping statistics --- 00:20:46.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.372 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # return 0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:46.372 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=75806 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 75806 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75806 ']' 00:20:46.373 11:04:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.373 11:04:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:46.694 [2024-12-05 11:04:13.557011] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:20:46.694 [2024-12-05 11:04:13.557092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.694 [2024-12-05 11:04:13.703092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.694 [2024-12-05 11:04:13.756774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.694 [2024-12-05 11:04:13.756824] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.694 [2024-12-05 11:04:13.756835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.694 [2024-12-05 11:04:13.756843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.694 [2024-12-05 11:04:13.756850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
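[editor's note] Everything in the trace up to this point is nvmf/setup.sh building and verifying the second initiator/target veth pair. The snippet below is a minimal hand-written sketch of that per-pair setup, not the script itself: it assumes root, iproute2, and the nvmf_ns_spdk namespace and nvmf_br bridge created earlier in the run, and the bit-shift body of val_to_ip is inferred from the printf output visible above (167772163 -> 10.0.0.3).

  # Sketch (assumptions noted above): rebuild one initiator/target pair.
  ns=nvmf_ns_spdk; br=nvmf_br

  val_to_ip() {                       # inferred: 167772163 -> 10.0.0.3
    local val=$1
    printf '%u.%u.%u.%u\n' $((val >> 24 & 255)) $((val >> 16 & 255)) \
      $((val >> 8 & 255)) $((val & 255))
  }

  ip link add initiator1 type veth peer name initiator1_br
  ip link add target1    type veth peer name target1_br
  ip link set target1 netns "$ns"     # target side lives in the namespace

  ip addr add "$(val_to_ip 167772163)/24" dev initiator1              # 10.0.0.3
  ip netns exec "$ns" ip addr add "$(val_to_ip 167772164)/24" dev target1  # 10.0.0.4
  # setup.sh also records each IP in ifalias so get_ip_address can read it back:
  echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
  ip link set initiator1 up
  ip netns exec "$ns" ip link set target1 up

  # Bridge the _br peers so initiator and namespaced target can reach each other.
  ip link set initiator1_br master "$br" && ip link set initiator1_br up
  ip link set target1_br   master "$br" && ip link set target1_br up

  # Allow NVMe/TCP (port 4420) in, tagged so teardown can find the rule.
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'

  # Verify both directions, as ping_ips does above.
  ip netns exec "$ns" ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4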
00:20:46.694 [2024-12-05 11:04:13.757144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.694 [2024-12-05 11:04:13.803903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 [2024-12-05 11:04:14.527386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 [2024-12-05 11:04:14.539601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.651 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.651 null0 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 null1 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75838 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75838 /tmp/host.sock 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75838 ']' 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:47.652 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.652 11:04:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:47.652 [2024-12-05 11:04:14.633941] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:20:47.652 [2024-12-05 11:04:14.634018] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75838 ] 00:20:47.652 [2024-12-05 11:04:14.766759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.910 [2024-12-05 11:04:14.815060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.910 [2024-12-05 11:04:14.857247] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.478 11:04:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:48.478 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:48.757 11:04:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 [2024-12-05 11:04:15.873554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:48.757 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.016 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:49.017 
11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.017 11:04:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:20:49.017 11:04:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:49.584 [2024-12-05 11:04:16.566768] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:49.584 [2024-12-05 11:04:16.566800] 
bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:49.584 [2024-12-05 11:04:16.566820] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:49.584 [2024-12-05 11:04:16.572795] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:49.584 [2024-12-05 11:04:16.627578] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:20:49.584 [2024-12-05 11:04:16.628563] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x195dda0:1 started. 00:20:49.584 [2024-12-05 11:04:16.630377] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:49.584 [2024-12-05 11:04:16.630537] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:49.584 [2024-12-05 11:04:16.635509] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x195dda0 was disconnected and freed. delete nvme_qpair. 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:50.151 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( 
max-- )) 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:50.152 [2024-12-05 11:04:17.307933] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x196c2a0:1 started. 
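The waitforcondition polling loop that the xtrace above keeps re-entering (autotest_common.sh @918-@922: local cond, local max=10, (( max-- )), eval, return 0) is easier to read in one piece. A minimal sketch reconstructed from the trace; the per-poll sleep and the failure return are assumptions, since only the bounded countdown, the eval of the condition string, and the success return appear in the log:

    waitforcondition() {
        local cond=$1   # condition string, e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
        local max=10    # poll budget, matching 'local max=10' in the trace

        while ((max--)); do
            # the trace re-evaluates the quoted condition via eval on every pass
            if eval "$cond"; then
                return 0
            fi
            sleep 1     # assumed pacing between polls; not visible in the xtrace
        done
        return 1        # assumed failure path once the budget runs out
    }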
00:20:50.152 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 [2024-12-05 11:04:17.314781] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x196c2a0 was disconnected and freed. delete nvme_qpair. 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 [2024-12-05 11:04:17.428295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:50.412 [2024-12-05 11:04:17.429362] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:50.412 [2024-12-05 11:04:17.429521] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:50.412 [2024-12-05 11:04:17.435331] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:50.412 [2024-12-05 11:04:17.499610] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:20:50.412 [2024-12-05 11:04:17.499662] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:50.412 [2024-12-05 11:04:17.499673] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:50.412 [2024-12-05 11:04:17.499680] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:50.412 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # sort -n 00:20:50.413 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.413 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.413 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:50.413 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.672 [2024-12-05 11:04:17.664741] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:50.672 [2024-12-05 11:04:17.664901] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:50.672 [2024-12-05 11:04:17.666067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.672 [2024-12-05 11:04:17.666101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.672 [2024-12-05 11:04:17.666114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.672 [2024-12-05 11:04:17.666124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.672 [2024-12-05 11:04:17.666134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.672 [2024-12-05 11:04:17.666144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.672 [2024-12-05 11:04:17.666154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:50.672 [2024-12-05 11:04:17.666163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:50.672 [2024-12-05 11:04:17.666173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1939fb0 is same with the state(6) to be set 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:50.672 [2024-12-05 11:04:17.670724] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:50.672 [2024-12-05 11:04:17.670747] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:50.672 [2024-12-05 11:04:17.670798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1939fb0 (9): Bad file descriptor 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.672 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.673 11:04:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:50.673 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.932 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.933 11:04:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.933 11:04:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.310 [2024-12-05 11:04:19.043480] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:52.310 [2024-12-05 11:04:19.043648] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:52.310 [2024-12-05 11:04:19.043709] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:52.310 [2024-12-05 11:04:19.049497] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:52.310 [2024-12-05 11:04:19.107856] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:20:52.310 [2024-12-05 11:04:19.108854] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1933dd0:1 started. 00:20:52.310 [2024-12-05 11:04:19.111087] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:52.310 [2024-12-05 11:04:19.111247] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:52.311 [2024-12-05 11:04:19.112555] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1933dd0 was disconnected and freed. delete nvme_qpair. 
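The block that follows is the negative half of the discovery test: with discovery already running for bdev prefix nvme, a second bdev_nvme_start_discovery against the same 10.0.0.2:8009 endpoint must be rejected, and the harness asserts on the -17 "File exists" JSON-RPC error dumped below. Assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py, as the test helpers in this repo normally do, the equivalent standalone invocation would be:

    # hedged standalone equivalent of the rpc_cmd call exercised below
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w
    # expected while the first discovery service is still attached:
    #   request rejected with JSON-RPC error code -17, message "File exists"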
00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 request: 00:20:52.311 { 00:20:52.311 "name": "nvme", 00:20:52.311 "trtype": "tcp", 00:20:52.311 "traddr": "10.0.0.2", 00:20:52.311 "adrfam": "ipv4", 00:20:52.311 "trsvcid": "8009", 00:20:52.311 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:52.311 "wait_for_attach": true, 00:20:52.311 "method": "bdev_nvme_start_discovery", 00:20:52.311 "req_id": 1 00:20:52.311 } 00:20:52.311 Got JSON-RPC error response 00:20:52.311 response: 00:20:52.311 { 00:20:52.311 "code": -17, 00:20:52.311 "message": "File exists" 00:20:52.311 } 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 request: 00:20:52.311 { 00:20:52.311 "name": "nvme_second", 00:20:52.311 "trtype": "tcp", 00:20:52.311 "traddr": "10.0.0.2", 00:20:52.311 "adrfam": "ipv4", 00:20:52.311 "trsvcid": "8009", 00:20:52.311 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:52.311 "wait_for_attach": true, 00:20:52.311 "method": "bdev_nvme_start_discovery", 00:20:52.311 "req_id": 1 00:20:52.311 } 00:20:52.311 Got JSON-RPC error response 00:20:52.311 response: 00:20:52.311 { 00:20:52.311 "code": -17, 00:20:52.311 "message": "File exists" 00:20:52.311 } 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:52.311 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.312 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:52.312 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.312 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q 
nqn.2021-12.io.spdk:test -T 3000 00:20:52.312 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.312 11:04:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.702 [2024-12-05 11:04:20.413490] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:53.702 [2024-12-05 11:04:20.413778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966890 with addr=10.0.0.2, port=8010 00:20:53.702 [2024-12-05 11:04:20.413952] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:53.702 [2024-12-05 11:04:20.413998] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:53.702 [2024-12-05 11:04:20.414086] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:54.267 [2024-12-05 11:04:21.411856] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:54.267 [2024-12-05 11:04:21.412088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1966890 with addr=10.0.0.2, port=8010 00:20:54.267 [2024-12-05 11:04:21.412267] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:54.267 [2024-12-05 11:04:21.412367] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:54.267 [2024-12-05 11:04:21.412401] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:55.674 [2024-12-05 11:04:22.410099] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:55.674 request: 00:20:55.674 { 00:20:55.674 "name": "nvme_second", 00:20:55.674 "trtype": "tcp", 00:20:55.674 "traddr": "10.0.0.2", 00:20:55.674 "adrfam": "ipv4", 00:20:55.674 "trsvcid": "8010", 00:20:55.674 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:55.674 "wait_for_attach": false, 00:20:55.674 "attach_timeout_ms": 3000, 00:20:55.674 "method": "bdev_nvme_start_discovery", 00:20:55.674 "req_id": 1 00:20:55.674 } 00:20:55.674 Got JSON-RPC error response 00:20:55.674 response: 00:20:55.674 { 00:20:55.674 "code": -110, 00:20:55.674 "message": "Connection timed out" 00:20:55.674 } 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:55.674 
11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75838 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:55.674 rmmod nvme_tcp 00:20:55.674 rmmod nvme_fabrics 00:20:55.674 rmmod nvme_keyring 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 75806 ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75806 ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75806' 00:20:55.674 killing process with pid 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75806 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:55.674 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@261 -- # continue 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:20:55.933 00:20:55.933 real 0m10.402s 00:20:55.933 user 0m18.561s 00:20:55.933 sys 0m2.829s 00:20:55.933 ************************************ 00:20:55.933 END TEST nvmf_host_discovery 00:20:55.933 ************************************ 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.933 11:04:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:55.933 11:04:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:55.933 11:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.933 11:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.933 11:04:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.933 ************************************ 00:20:55.933 START TEST nvmf_host_multipath_status 00:20:55.933 ************************************ 00:20:55.933 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:56.192 * Looking for test storage... 
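Before the multipath_status test body starts, the xtrace below walks through the lcov version gate from scripts/common.sh (lt 1.15 2, which calls cmp_versions 1.15 '<' 2). A condensed sketch of those helpers, reconstructed from that trace; the fallback in decimal and the unsupported-operator handling are assumptions:

    lt() { cmp_versions "$1" "<" "$2"; }    # e.g. lt 1.15 2, as in the trace

    decimal() {
        local d=$1
        # keep purely numeric components, fall back to 0 otherwise (assumed)
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l op=$2 v
        local lt=0 gt=0 eq=0
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':', per the trace
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        case "$op" in
            "<") lt=1 ;; ">") gt=1 ;; "==") eq=1 ;;
            "<=") lt=1 eq=1 ;; ">=") gt=1 eq=1 ;;
            *) return 1 ;;               # unsupported operator (assumed handling)
        esac
        # walk the longer of the two component lists, padding the shorter with 0
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && return $((!gt))
            ((ver1[v] < ver2[v])) && return $((!lt))
        done
        ((eq == 1))                      # all components equal: true only for ==, <=, >=
    }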
00:20:56.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:56.192 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:56.192 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:20:56.192 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:56.192 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:56.192 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.193 --rc genhtml_branch_coverage=1 00:20:56.193 --rc genhtml_function_coverage=1 00:20:56.193 --rc genhtml_legend=1 00:20:56.193 --rc geninfo_all_blocks=1 00:20:56.193 --rc geninfo_unexecuted_blocks=1 00:20:56.193 00:20:56.193 ' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.193 --rc genhtml_branch_coverage=1 00:20:56.193 --rc genhtml_function_coverage=1 00:20:56.193 --rc genhtml_legend=1 00:20:56.193 --rc geninfo_all_blocks=1 00:20:56.193 --rc geninfo_unexecuted_blocks=1 00:20:56.193 00:20:56.193 ' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.193 --rc genhtml_branch_coverage=1 00:20:56.193 --rc genhtml_function_coverage=1 00:20:56.193 --rc genhtml_legend=1 00:20:56.193 --rc geninfo_all_blocks=1 00:20:56.193 --rc geninfo_unexecuted_blocks=1 00:20:56.193 00:20:56.193 ' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:56.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.193 --rc genhtml_branch_coverage=1 00:20:56.193 --rc genhtml_function_coverage=1 00:20:56.193 --rc genhtml_legend=1 00:20:56.193 --rc geninfo_all_blocks=1 00:20:56.193 --rc geninfo_unexecuted_blocks=1 00:20:56.193 00:20:56.193 ' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.193 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.193 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:56.194 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:56.194 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@223 -- # create_target_ns 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:56.194 
11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # 
ips=() 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:56.194 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target0 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0 up 
00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:56.195 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:56.454 10.0.0.1 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:56.454 10.0.0.2 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:56.454 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 
00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # 
local dev=initiator1 in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target1 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:56.455 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772163 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:56.455 10.0.0.3 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:56.455 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772164 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:56.456 10.0.0.4 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:56.456 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:56.456 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:20:56.715 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:56.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:20:56.716 00:20:56.716 --- 10.0.0.1 ping statistics --- 00:20:56.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.716 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:56.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:56.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:56.716 00:20:56.716 --- 10.0.0.2 ping statistics --- 00:20:56.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.716 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:56.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:56.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:20:56.716 00:20:56.716 --- 10.0.0.3 ping statistics --- 00:20:56.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.716 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:56.716 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:56.716 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:56.716 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.110 ms 00:20:56.716 00:20:56.716 --- 10.0.0.4 ping statistics --- 00:20:56.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.716 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # return 0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ 
-n '' ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:56.717 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:56.717 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.976 11:04:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=76341 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 76341 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76341 ']' 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.976 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.977 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.977 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.977 11:04:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:56.977 [2024-12-05 11:04:23.942293] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:20:56.977 [2024-12-05 11:04:23.942365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.977 [2024-12-05 11:04:24.094352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:57.235 [2024-12-05 11:04:24.148996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.235 [2024-12-05 11:04:24.149063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.235 [2024-12-05 11:04:24.149073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.235 [2024-12-05 11:04:24.149083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.235 [2024-12-05 11:04:24.149090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
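The xtrace up to this point is nvmf/setup.sh building the legacy environment: each logical device (initiator0/initiator1 on the host, target0/target1 inside the nvmf_ns_spdk namespace) keeps its address in the interface's ifalias, every initiator/target pair gets one ping probe, and the resulting NVMF_*_IP variables are exported before nvmf_tgt is launched inside the namespace. A condensed sketch of that flow follows; get_dev_ip is shorthand for the get_ip_address helper traced above, not a separate SPDK function:

    # Read a device's IP from sysfs, optionally inside a network namespace.
    get_dev_ip() {
        local dev=$1 ns=$2
        if [[ -n $ns ]]; then
            ip netns exec "$ns" cat "/sys/class/net/$dev/ifalias"
        else
            cat "/sys/class/net/$dev/ifalias"
        fi
    }

    NVMF_FIRST_INITIATOR_IP=$(get_dev_ip initiator0)           # 10.0.0.1 in this run
    NVMF_SECOND_INITIATOR_IP=$(get_dev_ip initiator1)          # 10.0.0.3
    NVMF_FIRST_TARGET_IP=$(get_dev_ip target0 nvmf_ns_spdk)    # 10.0.0.2
    NVMF_SECOND_TARGET_IP=$(get_dev_ip target1 nvmf_ns_spdk)   # 10.0.0.4

    ping -c 1 "$NVMF_SECOND_TARGET_IP"    # one probe per pair, as in the trace
    # then the target app starts inside the namespace (nvmf/common.sh@327):
    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

Storing the addresses in ifalias lets the helpers recover them straight from sysfs instead of parsing `ip addr` output, which is why the trace is dominated by `cat /sys/class/net/<dev>/ifalias` calls.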
00:20:57.235 [2024-12-05 11:04:24.150062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.235 [2024-12-05 11:04:24.150065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.235 [2024-12-05 11:04:24.193615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76341 00:20:57.802 11:04:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:58.074 [2024-12-05 11:04:25.095165] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.074 11:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:58.332 Malloc0 00:20:58.332 11:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:58.590 11:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.850 11:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.109 [2024-12-05 11:04:26.022508] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:59.109 [2024-12-05 11:04:26.218296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76397 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76397 /var/tmp/bdevperf.sock 00:20:59.109 
11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76397 ']' 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:59.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:59.109 11:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:00.107 11:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.107 11:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:00.107 11:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:00.365 11:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:00.623 Nvme0n1 00:21:00.623 11:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:01.189 Nvme0n1 00:21:01.189 11:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.189 11:04:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:03.090 11:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:03.090 11:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:03.349 11:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:03.608 11:04:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:04.544 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:04.544 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:04.544 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:21:04.544 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:04.802 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:04.802 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:04.802 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:04.802 11:04:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:05.060 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:05.060 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:05.060 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.060 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:05.319 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.319 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:05.319 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:05.319 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:05.632 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:05.890 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:21:05.890 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:05.890 11:04:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:06.149 11:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:06.410 11:04:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:07.347 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:07.347 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:07.347 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.347 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:07.606 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:07.606 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:07.606 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:07.606 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:07.865 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:07.865 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:07.865 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:07.865 11:04:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.135 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.135 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:08.135 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:08.135 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.397 11:04:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:08.397 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:08.656 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:08.656 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:21:08.656 11:04:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:08.916 11:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:09.175 11:04:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:21:10.110 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:21:10.110 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:10.110 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.110 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:10.367 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:10.367 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:10.367 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.367 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:10.769 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:10.769 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:10.769 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:10.769 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:11.033 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.034 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:11.034 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.034 11:04:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:11.034 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.034 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:11.034 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.034 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:11.292 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.292 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:11.292 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:11.292 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:11.550 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:11.550 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:21:11.550 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:12.116 11:04:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:12.116 11:04:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # 
port_status 4420 current true 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.490 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:13.748 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:13.748 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:13.748 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:13.748 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:14.007 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.007 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:14.007 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:14.007 11:04:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.007 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.007 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:14.007 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:14.007 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.267 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:14.267 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:14.267 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:14.267 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:14.527 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:14.527 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:21:14.527 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:14.786 11:04:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:15.044 11:04:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:21:15.981 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:21:15.981 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:15.981 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:15.981 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:16.240 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:16.240 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:16.240 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:16.240 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.500 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:16.500 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:16.500 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.500 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:16.759 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:16.759 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:16.759 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:16.759 11:04:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:17.022 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:17.022 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:17.022 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:17.022 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.280 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.280 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:17.280 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:17.280 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:17.562 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:17.562 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:21:17.562 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:17.821 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:17.821 11:04:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:21:19.194 11:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:21:19.194 11:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:19.194 11:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.194 11:04:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:19.194 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:19.194 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:19.194 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.194 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:21:19.452 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.452 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:19.452 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.452 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:19.710 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.710 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:19.710 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:19.710 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.968 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:19.968 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:21:19.968 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:19.968 11:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:19.968 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:19.968 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:19.968 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:19.968 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:20.226 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:20.226 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:21:20.484 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:21:20.484 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:20.801 11:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:21.075 11:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:21:22.012 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:21:22.012 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:22.012 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.012 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:22.271 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.271 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:22.271 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.271 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:22.529 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.529 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:22.529 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.529 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:22.789 11:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:23.047 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.047 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:23.047 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:23.047 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:23.308 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:23.308 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:21:23.308 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:23.567 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:23.826 11:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:21:24.759 11:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:21:24.760 11:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:24.760 11:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:24.760 11:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:25.018 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:25.018 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:25.018 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:25.018 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.277 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.277 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:25.277 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.277 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:25.536 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
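Every check_status call in this trace expands into six port_status checks: the current, connected, and accessible flags for portal 4420 and then portal 4421, read back from bdevperf's own view of its I/O paths. A condensed sketch of the helper as it appears at multipath_status.sh@64 above:

    # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
    port_status() {
        local got
        got=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                  bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }

    port_status 4420 current true    # e.g. the first check of check_status at @121 above

check_status simply chains six of these, so an argument list like `true false true true true false` reads as the expected current, connected, and accessible values for portals 4420 and 4421, in that order.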
00:21:25.536 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:25.536 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.536 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:25.802 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:25.802 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:25.802 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:25.802 11:04:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:26.098 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.098 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:26.098 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:26.098 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:26.357 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:26.357 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:21:26.357 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:26.615 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:21:26.615 11:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:21:27.995 11:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:21:27.995 11:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:27.995 11:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.995 11:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:27.995 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:27.995 11:04:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:27.995 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:27.995 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:28.254 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.254 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:28.254 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.254 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:28.513 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.513 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:28.513 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.513 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:28.771 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:28.771 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:28.771 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:28.771 11:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:29.029 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.029 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:29.029 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:29.029 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:29.290 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:29.290 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:21:29.290 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:29.548 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:29.548 11:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:21:30.939 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:30.940 11:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.224 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:31.483 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.483 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:31.483 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.483 11:04:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:31.741 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:31.741 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:21:31.741 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:31.741 11:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76397 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76397 ']' 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76397 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76397 00:21:31.999 killing process with pid 76397 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76397' 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76397 00:21:31.999 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76397 00:21:31.999 { 00:21:31.999 "results": [ 00:21:31.999 { 00:21:31.999 "job": "Nvme0n1", 00:21:31.999 "core_mask": "0x4", 00:21:31.999 "workload": "verify", 00:21:31.999 "status": "terminated", 00:21:31.999 "verify_range": { 00:21:31.999 "start": 0, 00:21:31.999 "length": 16384 00:21:31.999 }, 00:21:31.999 "queue_depth": 128, 00:21:31.999 "io_size": 4096, 00:21:31.999 "runtime": 30.969344, 00:21:31.999 "iops": 10491.665564501463, 00:21:31.999 "mibps": 40.98306861133384, 00:21:31.999 "io_failed": 0, 00:21:31.999 "io_timeout": 0, 00:21:31.999 "avg_latency_us": 12173.651174082024, 00:21:31.999 "min_latency_us": 437.56465863453815, 00:21:31.999 "max_latency_us": 4015751.2995983935 00:21:31.999 } 00:21:31.999 ], 00:21:31.999 "core_count": 1 00:21:31.999 } 00:21:32.261 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76397 00:21:32.261 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:32.261 [2024-12-05 11:04:26.289040] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
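All of the port_status assertions traced above (host/multipath_status.sh@64) follow one pattern: query the bdevperf application over its RPC socket with bdev_nvme_get_io_paths, select the io_paths entry whose listener port (trsvcid) matches using jq, and compare the reported attribute against the expected value. A minimal reconstruction of the helper, inferred from the xtrace rather than copied from the SPDK source:

    # Sketch inferred from the xtrace at multipath_status.sh@64 -- not the
    # verbatim source. The rpc.py path and the bdevperf socket are taken
    # from the commands traced above.
    rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$($rpc_py bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

check_status then simply chains six of these calls (current, connected, and accessible for ports 4420 and 4421), which is why the set_ANA_state non_optimized inaccessible transition above flips the 4421 current and accessible expectations to false (the @69 and @73 checks) while connected at @71 stays true.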
00:21:32.261 [2024-12-05 11:04:26.289137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76397 ] 00:21:32.261 [2024-12-05 11:04:26.476097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.261 [2024-12-05 11:04:26.543796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.261 [2024-12-05 11:04:26.595312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.261 Running I/O for 90 seconds... 00:21:32.261 8854.00 IOPS, 34.59 MiB/s [2024-12-05T11:04:59.420Z] 8779.00 IOPS, 34.29 MiB/s [2024-12-05T11:04:59.420Z] 9285.67 IOPS, 36.27 MiB/s [2024-12-05T11:04:59.420Z] 9920.25 IOPS, 38.75 MiB/s [2024-12-05T11:04:59.420Z] 10205.00 IOPS, 39.86 MiB/s [2024-12-05T11:04:59.420Z] 10418.17 IOPS, 40.70 MiB/s [2024-12-05T11:04:59.420Z] 10586.14 IOPS, 41.35 MiB/s [2024-12-05T11:04:59.420Z] 10731.88 IOPS, 41.92 MiB/s [2024-12-05T11:04:59.420Z] 10861.89 IOPS, 42.43 MiB/s [2024-12-05T11:04:59.420Z] 10850.90 IOPS, 42.39 MiB/s [2024-12-05T11:04:59.420Z] 10719.64 IOPS, 41.87 MiB/s [2024-12-05T11:04:59.420Z] 10705.00 IOPS, 41.82 MiB/s [2024-12-05T11:04:59.420Z] 10811.92 IOPS, 42.23 MiB/s [2024-12-05T11:04:59.420Z] [2024-12-05 11:04:41.772810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.772878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.772925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.772940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.772959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.772973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.772991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
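The paired print_command / print_completion notices that make up the rest of the dump decode uniformly: each WRITE or READ submitted while a listener sat in an inaccessible ANA state completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path-related) with status code 0x02, and dnr:0 leaves the Do Not Retry bit clear, so the host multipath layer is free to reissue the I/O on the other path. To gauge how many I/Os landed in the transition windows, a count over the saved log suffices (file path taken from the cat command above):

    # Illustrative one-liner: count the path-related error completions in the
    # bdevperf log dumped above.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt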
00:21:32.261 [2024-12-05 11:04:41.773084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.261 [2024-12-05 11:04:41.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.261 [2024-12-05 11:04:41.773664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.261 [2024-12-05 11:04:41.773677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.773964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774073] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.262 [2024-12-05 11:04:41.774198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:32.262 [2024-12-05 11:04:41.774402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.262 [2024-12-05 11:04:41.774660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:32.262 [2024-12-05 11:04:41.774678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.774966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.774979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.263 [2024-12-05 11:04:41.775250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:32.263 
[2024-12-05 11:04:41.775374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:32.263 [2024-12-05 11:04:41.775539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.263 [2024-12-05 11:04:41.775553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.775788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.775971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.775984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.776016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.264 [2024-12-05 11:04:41.776053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.776533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.776546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.777137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.264 [2024-12-05 11:04:41.777160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:32.264 [2024-12-05 11:04:41.777187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 
[2024-12-05 11:04:41.777287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:32.265 [2024-12-05 11:04:41.777645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:32.265 [2024-12-05 11:04:41.777669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5112 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:21:32.265 [2024-12-05 11:04:41.777683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:21:32.265 [2024-12-05 11:04:41.777707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:32.265 [2024-12-05 11:04:41.777721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:21:32.265 [2024-12-05 11:04:41.777744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:32.265 [2024-12-05 11:04:41.777757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:21:32.265 [2024-12-05 11:04:41.777782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:32.265 [2024-12-05 11:04:41.777795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:21:32.265 10569.36 IOPS, 41.29 MiB/s [2024-12-05T11:04:59.424Z]
9864.73 IOPS, 38.53 MiB/s [2024-12-05T11:04:59.424Z]
9248.19 IOPS, 36.13 MiB/s [2024-12-05T11:04:59.424Z]
8704.18 IOPS, 34.00 MiB/s [2024-12-05T11:04:59.424Z]
8460.67 IOPS, 33.05 MiB/s [2024-12-05T11:04:59.424Z]
8603.16 IOPS, 33.61 MiB/s [2024-12-05T11:04:59.424Z]
8818.15 IOPS, 34.45 MiB/s [2024-12-05T11:04:59.424Z]
9158.29 IOPS, 35.77 MiB/s [2024-12-05T11:04:59.424Z]
9437.68 IOPS, 36.87 MiB/s [2024-12-05T11:04:59.424Z]
9619.30 IOPS, 37.58 MiB/s [2024-12-05T11:04:59.424Z]
9705.50 IOPS, 37.91 MiB/s [2024-12-05T11:04:59.424Z]
9781.60 IOPS, 38.21 MiB/s [2024-12-05T11:04:59.424Z]
9912.27 IOPS, 38.72 MiB/s [2024-12-05T11:04:59.424Z]
10095.89 IOPS, 39.44 MiB/s [2024-12-05T11:04:59.424Z]
10281.79 IOPS, 40.16 MiB/s [2024-12-05T11:04:59.424Z]
[2024-12-05 11:04:56.662302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:32.265 [2024-12-05 11:04:56.662370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
[... several dozen further READ/WRITE command/completion NOTICE pairs from 11:04:56 condensed (lba 7728-8672); every completion reports the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status ...]
00:21:32.267 10401.38 IOPS, 40.63 MiB/s [2024-12-05T11:04:59.426Z]
10451.47 IOPS, 40.83 MiB/s [2024-12-05T11:04:59.426Z]
Received shutdown signal, test time was about 30.970016 seconds
00:21:32.267
00:21:32.267 Latency(us)
00:21:32.267 [2024-12-05T11:04:59.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:32.267 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:32.267 Verification LBA range: start 0x0 length 0x4000
00:21:32.267 Nvme0n1 :
30.97 10491.67 40.98 0.00 0.00 12173.65 437.56 4015751.30 00:21:32.267 [2024-12-05T11:04:59.426Z] =================================================================================================================== 00:21:32.267 [2024-12-05T11:04:59.426Z] Total : 10491.67 40.98 0.00 0.00 12173.65 437.56 4015751.30 00:21:32.267 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:32.525 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:32.526 rmmod nvme_tcp 00:21:32.526 rmmod nvme_fabrics 00:21:32.526 rmmod nvme_keyring 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 76341 ']' 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 76341 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76341 ']' 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76341 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.526 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76341 00:21:32.783 killing process with pid 76341 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76341' 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76341 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76341 00:21:32.783 11:04:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:32.783 11:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:33.041 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator1 
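The nvmf_fini trace above tears the virtual test network down in three steps: drop the target network namespace (remove_target_ns), delete the nvmf_br bridge if its /sys/class/net entry still exists, then delete the remaining veth endpoints in the root namespace. A minimal standalone sketch of that teardown, assuming the device and namespace names shown in this trace, with _remove_target_ns taken here to reduce to `ip netns delete` (a sketch of the traced behavior, not the verbatim nvmf/setup.sh code; run as root):

    ip netns delete nvmf_ns_spdk 2>/dev/null || true   # target0/target1 disappear with the netns
    if [ -e /sys/class/net/nvmf_br/address ]; then     # delete_main_bridge
        ip link delete nvmf_br
    fi
    for dev in initiator0 initiator1; do               # veth peers left in the root namespace
        if [ -e "/sys/class/net/$dev/address" ]; then  # delete_dev
            ip link delete "$dev"
        fi
    done

The target0/target1 entries are skipped with `continue` in the trace below for exactly this reason: they were moved into nvmf_ns_spdk during setup, so deleting the namespace already removed them.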
00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:21:33.042 ************************************ 00:21:33.042 END TEST nvmf_host_multipath_status 00:21:33.042 ************************************ 00:21:33.042 00:21:33.042 real 0m37.026s 00:21:33.042 user 1m54.444s 00:21:33.042 sys 0m13.884s 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.042 ************************************ 00:21:33.042 START TEST nvmf_discovery_remove_ifc 00:21:33.042 ************************************ 00:21:33.042 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:33.042 * Looking for test storage... 
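One detail that makes the firewall cleanup above a one-liner: every rule the setup code adds is tagged with an iptables comment starting with SPDK_NVMF (visible in the ipts calls further down this log), so iptr can dump the whole ruleset, filter the tagged lines out, and restore the rest. A sketch of the two helpers consistent with the traced commands (the function bodies are inferred from the trace, not copied from nvmf/common.sh):

    # Add side: tag each rule with the arguments that created it.
    # Usage (as traced): ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

    # Cleanup side, as seen at nvmf/common.sh@548: save, drop tagged rules, restore.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }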
00:21:33.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:33.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:33.301 --rc genhtml_branch_coverage=1
00:21:33.301 --rc genhtml_function_coverage=1
00:21:33.301 --rc genhtml_legend=1
00:21:33.301 --rc geninfo_all_blocks=1
00:21:33.301 --rc geninfo_unexecuted_blocks=1
00:21:33.301
00:21:33.301 '
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' [... same --rc option block as above; condensed ...] '
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov [... same --rc option block; condensed ...] '
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov [... same --rc option block; condensed ...] '
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
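The lt/cmp_versions trace above is a dotted-version comparison: both strings are split on '.', '-' and ':' and compared element by element, and here it concludes 1.15 < 2, so the pre-2.0 lcov option block gets exported. A compact bash re-implementation of the same comparison idea (a sketch only; scripts/common.sh handles more operators and edge cases than shown here):

    # Compare dotted versions element-wise; missing components count as 0.
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
            if (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
        done
        [[ $2 == '=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "old lcov"   # 1 < 2 on the first component, as in the trace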
00:21:33.301 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=virt
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same /opt/golangci, /opt/protoc and /opt/go entries repeated several times; condensed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated /opt entries and tail as above; condensed ...]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated /opt entries and tail; condensed ...]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same repeated /opt entries and tail; condensed ...]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
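The "[: : integer expression expected" complaint above is bash's test builtin being handed an empty string where -eq needs an integer: line 31 runs `[ '' -eq 1 ]` because the flag it checks is empty in this configuration. Used as an `if`/`&&` condition, the failed test simply counts as false, which is why the trace continues normally below. The usual defensive pattern is to give the value an explicit numeric default before the comparison (an illustrative sketch; the variable name is hypothetical, not the one used in nvmf/common.sh):

    flag=''                            # unset/empty feature toggle
    # [ "$flag" -eq 1 ]                # would print: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then    # default empty to 0 before the numeric test
        echo 'feature enabled'
    fi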
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ virt != virt ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ no == yes ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # [[ virt == phy ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]]
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@280 -- # nvmf_veth_init
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@223 -- # create_target_ns
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:33.302 11:05:00
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:21:33.302 11:05:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/setup.sh@152 -- # set_up target0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:33.302 10.0.0.1 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:33.302 10.0.0.2 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:33.302 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:33.302 
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:33.561 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772163 00:21:33.562 
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:33.562 10.0.0.3 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772164 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:33.562 10.0.0.4 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:33.562 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
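[Editor's note] The trace above is one full iteration of setup.sh's per-pair loop: create two veth pairs, move the target end into the nvmf_ns_spdk namespace, derive two consecutive IPs from an integer pool, enslave the *_br peers to the nvmf_br bridge, and open TCP/4420 on the initiator side. A condensed, hedged sketch of that sequence follows; every command appears in the xtrace above, but the octet arithmetic in val_to_ip is an assumption (the trace only shows the final printf with pre-split octets).

    # Sketch of the per-pair setup the trace performs, e.g. setup_pair 1 167772163
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
            $(( (val >>  8) & 255 )) $((  val         & 255 ))
    }
    setup_pair() {
        local id=$1 ip=$2 ns=nvmf_ns_spdk
        ip link add "initiator$id" type veth peer name "initiator${id}_br"
        ip link add "target$id"    type veth peer name "target${id}_br"
        ip link set "target$id" netns "$ns"                      # target side lives in the netns
        ip addr add "$(val_to_ip "$ip")/24" dev "initiator$id"   # 167772163 -> 10.0.0.3
        ip netns exec "$ns" ip addr add "$(val_to_ip $((ip + 1)))/24" dev "target$id"
        ip link set "initiator${id}_br" master nvmf_br           # both peers join the bridge
        ip link set "target${id}_br"    master nvmf_br
        ip link set "initiator$id" up; ip link set "initiator${id}_br" up
        ip link set "target${id}_br" up
        ip netns exec "$ns" ip link set "target$id" up
        iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT
        # setup.sh also mirrors each address into /sys/class/net/<dev>/ifalias,
        # which get_ip_address later reads back (omitted here for brevity).
    }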
00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:33.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:33.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:21:33.563 00:21:33.563 --- 10.0.0.1 ping statistics --- 00:21:33.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.563 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:33.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:33.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.020 ms 00:21:33.563 00:21:33.563 --- 10.0.0.2 ping statistics --- 00:21:33.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.563 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.563 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:33.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:33.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:21:33.564 00:21:33.564 --- 10.0.0.3 ping statistics --- 00:21:33.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.564 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:33.564 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:33.564 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.103 ms 00:21:33.564 00:21:33.564 --- 10.0.0.4 ping statistics --- 00:21:33.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.564 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # return 0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:33.564 
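[Editor's note] The ping exchange above is the reachability check (ping_ips 2): each initiator address is pinged from inside the target namespace, and each target address from the root namespace, so traffic crosses the bridge in both directions. A minimal restatement using the addresses the trace assigned:

    ns=nvmf_ns_spdk
    ip netns exec "$ns" ping -c 1 10.0.0.1   # target ns -> initiator0
    ping -c 1 10.0.0.2                       # root ns   -> target0
    ip netns exec "$ns" ping -c 1 10.0.0.3   # target ns -> initiator1
    ping -c 1 10.0.0.4                       # root ns   -> target1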
11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:33.564 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:33.565 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.823 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=77210 00:21:33.823 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 77210 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77210 ']' 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.824 11:05:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:33.824 [2024-12-05 11:05:00.793810] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:21:33.824 [2024-12-05 11:05:00.794024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.824 [2024-12-05 11:05:00.949472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.082 [2024-12-05 11:05:00.999099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.082 [2024-12-05 11:05:00.999151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.082 [2024-12-05 11:05:00.999161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.082 [2024-12-05 11:05:00.999169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.082 [2024-12-05 11:05:00.999176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
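[Editor's note] nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers; the command line is taken verbatim from the trace, while the polling loop is an assumption standing in for waitforlisten, whose internals the trace does not show.

    ip netns exec nvmf_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC socket until the app is ready (assumed equivalent of waitforlisten).
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done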
00:21:34.082 [2024-12-05 11:05:00.999481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.082 [2024-12-05 11:05:01.041193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.650 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.650 [2024-12-05 11:05:01.756505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.650 [2024-12-05 11:05:01.764628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:34.650 null0 00:21:34.650 [2024-12-05 11:05:01.796537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77242 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77242 /tmp/host.sock 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77242 ']' 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:21:34.910 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.910 11:05:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:34.910 [2024-12-05 11:05:01.872344] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:21:34.910 [2024-12-05 11:05:01.872405] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77242 ] 00:21:34.910 [2024-12-05 11:05:02.026924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.168 [2024-12-05 11:05:02.080845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:35.758 [2024-12-05 11:05:02.855243] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.758 11:05:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.138 [2024-12-05 11:05:03.905875] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:37.138 [2024-12-05 11:05:03.905908] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:37.138 [2024-12-05 11:05:03.905925] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:37.138 [2024-12-05 11:05:03.911907] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:37.138 [2024-12-05 11:05:03.966173] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:21:37.138 [2024-12-05 11:05:03.967173] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] 
Connecting qpair 0x1665f00:1 started. 00:21:37.138 [2024-12-05 11:05:03.968813] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:37.138 [2024-12-05 11:05:03.968861] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:37.138 [2024-12-05 11:05:03.968883] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:37.138 [2024-12-05 11:05:03.968899] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:37.138 [2024-12-05 11:05:03.968924] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.138 [2024-12-05 11:05:03.974600] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1665f00 was disconnected and freed. delete nvme_qpair. 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:37.138 11:05:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.138 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.138 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:37.138 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set target0 down 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
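[Editor's note] At this point the test runs two SPDK apps: the target inside the namespace and a second nvmf_tgt instance acting as the host on /tmp/host.sock. Because the host is started with --wait-for-rpc, bdev_nvme options must be set before framework init, and only then is discovery attached. The same sequence as direct rpc.py calls; rpc_cmd in the trace resolves to the repo's rpc.py, so the scripts/rpc.py path here is an assumption, while every flag is copied from the trace.

    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock \
        --wait-for-rpc -L bdev_nvme &
    rpc=/tmp/host.sock
    scripts/rpc.py -s "$rpc" bdev_nvme_set_options -e 1      # must precede framework init
    scripts/rpc.py -s "$rpc" framework_start_init
    scripts/rpc.py -s "$rpc" bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach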
00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:37.139 11:05:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:38.078 11:05:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:39.016 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:39.275 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.275 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:39.275 11:05:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:40.213 11:05:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:40.213 11:05:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:41.150 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.410 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:41.410 11:05:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:42.361 11:05:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:42.361 [2024-12-05 11:05:09.387895] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:42.361 [2024-12-05 11:05:09.387956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.361 [2024-12-05 11:05:09.387971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.361 [2024-12-05 11:05:09.387984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.361 [2024-12-05 11:05:09.387993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.361 [2024-12-05 11:05:09.388003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.361 [2024-12-05 11:05:09.388012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.361 [2024-12-05 11:05:09.388021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.361 [2024-12-05 11:05:09.388030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.361 [2024-12-05 11:05:09.388040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.361 [2024-12-05 11:05:09.388048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.361 [2024-12-05 11:05:09.388058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641fc0 is same with the state(6) to be set 00:21:42.361 [2024-12-05 11:05:09.397874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641fc0 (9): Bad file descriptor 00:21:42.361 [2024-12-05 11:05:09.407873] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:42.361 [2024-12-05 11:05:09.407896] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:42.361 [2024-12-05 11:05:09.407902] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:42.361 [2024-12-05 11:05:09.407909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:42.361 [2024-12-05 11:05:09.407950] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
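[Editor's note] The repeating get_bdev_list/sleep blocks above and below are one polling loop: after the target interface is torn down inside the namespace, the host is expected to drop nvme0n1 once the 2-second ctrlr-loss timeout expires. A sketch reconstructed from the pipeline the xtrace shows (bdev_get_bdevs | jq | sort | xargs); the function bodies are assumptions consistent with that trace.

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {   # spin until the bdev list equals the expected value
        local expected=$1
        while [[ $(get_bdev_list) != "$expected" ]]; do sleep 1; done
    }
    ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0
    ip netns exec nvmf_ns_spdk ip link set target0 down
    wait_for_bdev ''    # nvme0n1 must disappear after the loss timeout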
00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:43.334 [2024-12-05 11:05:10.415384] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:43.334 [2024-12-05 11:05:10.415519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1641fc0 with addr=10.0.0.2, port=4420 00:21:43.334 [2024-12-05 11:05:10.415566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1641fc0 is same with the state(6) to be set 00:21:43.334 [2024-12-05 11:05:10.415648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1641fc0 (9): Bad file descriptor 00:21:43.334 [2024-12-05 11:05:10.416699] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:43.334 [2024-12-05 11:05:10.416779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:43.334 [2024-12-05 11:05:10.416809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:43.334 [2024-12-05 11:05:10.416840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:43.334 [2024-12-05 11:05:10.416868] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:43.334 [2024-12-05 11:05:10.416887] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:43.334 [2024-12-05 11:05:10.416905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:43.334 [2024-12-05 11:05:10.416935] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:43.334 [2024-12-05 11:05:10.416952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:43.334 11:05:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:44.276 [2024-12-05 11:05:11.415426] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:44.276 [2024-12-05 11:05:11.415463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:44.276 [2024-12-05 11:05:11.415487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:44.276 [2024-12-05 11:05:11.415498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:44.276 [2024-12-05 11:05:11.415508] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:21:44.276 [2024-12-05 11:05:11.415517] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:44.276 [2024-12-05 11:05:11.415524] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:44.276 [2024-12-05 11:05:11.415529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:44.276 [2024-12-05 11:05:11.415563] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:21:44.276 [2024-12-05 11:05:11.415602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.276 [2024-12-05 11:05:11.415614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.276 [2024-12-05 11:05:11.415628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.276 [2024-12-05 11:05:11.415637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.276 [2024-12-05 11:05:11.415647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.276 [2024-12-05 11:05:11.415655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.276 [2024-12-05 11:05:11.415665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.276 [2024-12-05 11:05:11.415675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.276 [2024-12-05 11:05:11.415685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.276 [2024-12-05 11:05:11.415693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.276 [2024-12-05 11:05:11.415703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:21:44.276 [2024-12-05 11:05:11.416224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cda20 (9): Bad file descriptor 00:21:44.276 [2024-12-05 11:05:11.417234] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:44.276 [2024-12-05 11:05:11.417252] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:44.536 11:05:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:45.477 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.478 11:05:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:45.478 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.738 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:45.738 11:05:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:46.361 [2024-12-05 11:05:13.425094] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:46.361 [2024-12-05 11:05:13.425131] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:46.361 [2024-12-05 11:05:13.425146] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.361 [2024-12-05 11:05:13.431127] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:46.361 [2024-12-05 11:05:13.485365] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:21:46.361 [2024-12-05 11:05:13.486061] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x166e1d0:1 started. 00:21:46.361 [2024-12-05 11:05:13.487169] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:46.362 [2024-12-05 11:05:13.487211] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:46.362 [2024-12-05 11:05:13.487231] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:46.362 [2024-12-05 11:05:13.487246] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:46.362 [2024-12-05 11:05:13.487255] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:46.362 [2024-12-05 11:05:13.493731] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x166e1d0 was disconnected and freed. delete nvme_qpair. 
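Recovery starts here: the two ip commands traced at discovery_remove_ifc.sh@82/@83 above put the target interface back exactly as setup created it, and the bdev_nvme discovery records immediately above show the poller re-attaching and recreating the namespace as nvme1. For reference, the restore step is just:

    # Verbatim from the @82/@83 trace above: re-add the address inside the
    # target's network namespace and bring the link back up.
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip netns exec nvmf_ns_spdk ip link set target0 up

Once the path is back, wait_for_bdev nvme1n1 (traced at @86) succeeds as soon as get_bdev_list returns nvme1n1, which is what the comparison in the next block confirms.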
00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77242 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77242 ']' 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77242 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77242 00:21:46.636 killing process with pid 77242 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77242' 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77242 00:21:46.636 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77242 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:46.899 11:05:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:46.899 rmmod nvme_tcp 00:21:46.899 rmmod nvme_fabrics 00:21:46.899 rmmod nvme_keyring 00:21:46.899 11:05:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 77210 ']' 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 77210 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77210 ']' 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77210 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.899 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77210 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:47.158 killing process with pid 77210 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77210' 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77210 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77210 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:47.158 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 
-- # for dev in "${dev_map[@]}" 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:21:47.417 00:21:47.417 real 0m14.327s 00:21:47.417 user 0m23.649s 00:21:47.417 sys 0m3.488s 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.417 ************************************ 00:21:47.417 END TEST nvmf_discovery_remove_ifc 00:21:47.417 ************************************ 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.417 ************************************ 00:21:47.417 START TEST nvmf_identify_kernel_target 00:21:47.417 ************************************ 00:21:47.417 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:47.677 * Looking for test storage... 00:21:47.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.677 --rc genhtml_branch_coverage=1 00:21:47.677 --rc genhtml_function_coverage=1 00:21:47.677 --rc genhtml_legend=1 00:21:47.677 --rc geninfo_all_blocks=1 00:21:47.677 --rc geninfo_unexecuted_blocks=1 00:21:47.677 00:21:47.677 ' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.677 --rc genhtml_branch_coverage=1 00:21:47.677 --rc genhtml_function_coverage=1 00:21:47.677 --rc genhtml_legend=1 00:21:47.677 --rc geninfo_all_blocks=1 00:21:47.677 --rc geninfo_unexecuted_blocks=1 00:21:47.677 00:21:47.677 ' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.677 --rc genhtml_branch_coverage=1 00:21:47.677 --rc genhtml_function_coverage=1 00:21:47.677 --rc genhtml_legend=1 00:21:47.677 --rc geninfo_all_blocks=1 00:21:47.677 --rc geninfo_unexecuted_blocks=1 00:21:47.677 00:21:47.677 ' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.677 --rc genhtml_branch_coverage=1 00:21:47.677 --rc genhtml_function_coverage=1 00:21:47.677 --rc genhtml_legend=1 00:21:47.677 --rc geninfo_all_blocks=1 00:21:47.677 --rc geninfo_unexecuted_blocks=1 00:21:47.677 00:21:47.677 ' 00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
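identify_kernel_nvmf.sh sources test/nvmf/common.sh, and nvmftestinit below rebuilds the virtual topology from scratch through test/nvmf/setup.sh. Condensed, the long trace that follows amounts to the commands below for interface pair 0; pair 1 repeats the same steps with initiator1/target1 on 10.0.0.3/10.0.0.4. (This is a summary of the trace, not the script itself; the helpers shown in the trace wrap these calls with namespace handling and bookkeeping.)

    # One namespace for the target side, one bridge joining both sides.
    ip netns add nvmf_ns_spdk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # veth pairs: the bare end carries traffic, the *_br peer is enslaved to
    # the bridge; the target end is moved into the namespace.
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0_br master nvmf_br
    ip link set target0_br master nvmf_br
    # Admit NVMe/TCP traffic (port 4420) arriving on the initiator interface.
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
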
00:21:47.677 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:47.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:47.678 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@223 -- # create_target_ns 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:47.678 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:47.678 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:47.679 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target0 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:47.679 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:47.679 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:47.938 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:47.939 10.0.0.1 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:47.939 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:47.939 10.0.0.2 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:47.939 11:05:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local 
dev=initiator1_br in_ns= 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target1 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:47.939 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772163 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:47.940 11:05:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:47.940 10.0.0.3 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772164 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:47.940 10.0.0.4 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:47.940 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:48.198 11:05:15 
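The pair-1 bring-up traced above, condensed into an annotator's sketch (not the script itself; device and address names are taken verbatim from the log, and the bridge nvmf_br and namespace nvmf_ns_spdk were created earlier in the run):

    # Condensed sketch of one initiator/target veth pair, as traced.
    ns=nvmf_ns_spdk bridge=nvmf_br
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1    type veth peer name target1_br
    ip link set target1 netns "$ns"                    # target half lives in the netns
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec "$ns" ip addr add 10.0.0.4/24 dev target1
    echo 10.0.0.3 > /sys/class/net/initiator1/ifalias  # IP bookkeeping, see below
    echo 10.0.0.4 | ip netns exec "$ns" tee /sys/class/net/target1/ifalias
    ip link set initiator1_br master "$bridge"         # bridge joins the two halves
    ip link set target1_br    master "$bridge"
    for dev in initiator1 initiator1_br target1_br; do ip link set "$dev" up; done
    ip netns exec "$ns" ip link set target1 up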
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:48.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
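The ipts calls traced above (setup.sh@73) wrap iptables so that every rule the test inserts carries an identifying comment; the expansion at common.sh@547 shows the full rule spec replayed into the comment. A sketch of the wrapper as it evidently behaves, with the body inferred from that expansion:

    # Every test-owned rule is tagged SPDK_NVMF:<args> so teardown can
    # find exactly the rules this run added.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT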
00:21:48.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:21:48.198 00:21:48.198 --- 10.0.0.1 ping statistics --- 00:21:48.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.198 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:48.198 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:48.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
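The addresses being pinged here come from a 32-bit pool counter rather than hand-picked strings: setup_interface_pair is called with 167772161 (0x0A000001) onward, ip_pool advances by 2 per pair, and val_to_ip (setup.sh@11-13) renders the counter as a dotted quad, which is why consecutive pairs land on 10.0.0.1/.2 and then 10.0.0.3/.4. A sketch of the conversion; the octet-splitting arithmetic is an assumption consistent with the traced printf:

    # 167772163 == 0x0A000003 -> 10.0.0.3
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $((val >> 24)) $(((val >> 16) & 255)) \
            $(((val >> 8) & 255)) $((val & 255))
    }
    val_to_ip 167772164   # -> 10.0.0.4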
00:21:48.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:21:48.199 00:21:48.199 --- 10.0.0.2 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:48.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
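Note how the scripts recover addresses throughout this stretch: set_ip mirrored each assignment into /sys/class/net/<dev>/ifalias, and get_ip_address (setup.sh@156-166) reads it back, entering the namespace when the device lives there. A simplified model; the real helper resolves the namespace through a bash nameref (NVMF_TARGET_NS_CMD), which this flattens to a plain argument:

    get_ip_address() {
        local dev=$1 ns=${2:+ip netns exec $2}
        $ns cat "/sys/class/net/$dev/ifalias"
    }
    get_ip_address initiator1               # -> 10.0.0.3
    get_ip_address target1 nvmf_ns_spdk     # -> 10.0.0.4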
00:21:48.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:21:48.199 00:21:48.199 --- 10.0.0.3 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:48.199 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
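The ping_ips verification being traced here, condensed: one packet from inside the target namespace to each initiator address, and one from the host side to each target address, proving both directions across the bridge before any NVMe traffic is attempted (uses the get_ip_address sketch above):

    for pair in 0 1; do
        ip netns exec nvmf_ns_spdk ping -c 1 "$(get_ip_address initiator$pair)"
        ping -c 1 "$(get_ip_address target$pair nvmf_ns_spdk)"
    done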
00:21:48.199 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.129 ms 00:21:48.199 00:21:48.199 --- 10.0.0.4 ping statistics --- 00:21:48.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.199 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # return 0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:48.199 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:21:48.200 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:21:48.200 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:48.200 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
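Net effect of the nvmf_legacy_env pass running here (common.sh@306): the older variable names that the test suites still consume are resolved against the dev_map built above, values read back through ifalias. The first four have resolved at this point; the two target IPs complete just below:

    NVMF_TARGET_INTERFACE=target0
    NVMF_TARGET_INTERFACE2=target1
    NVMF_FIRST_INITIATOR_IP=10.0.0.1    # initiator0
    NVMF_SECOND_INITIATOR_IP=10.0.0.3   # initiator1
    NVMF_FIRST_TARGET_IP=10.0.0.2       # target0, inside nvmf_ns_spdk
    NVMF_SECOND_TARGET_IP=10.0.0.4      # target1, inside nvmf_ns_spdk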
00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:48.458 11:05:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:49.025 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:49.025 Waiting for block devices as requested 00:21:49.025 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:49.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:49.282 No valid GPT data, bailing 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:49.282 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 
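What the per-device probe here is doing, in outline: skip zoned devices, then treat an NVMe namespace as usable only when spdk-gpt.py and blkid both find no partition signature on it, so "No valid GPT data, bailing" is the desired outcome. A sketch of that selection logic, not the scripts' exact control flow (the last clean device wins, which is why /dev/nvme1n1 ends up exported below):

    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # is_block_zoned: anything other than "none" is skipped
        [[ -e /sys/block/$dev/queue/zoned && $(< "/sys/block/$dev/queue/zoned") != none ]] && continue
        # block_in_use: no PTTYPE from blkid means no partition table
        if ! blkid -s PTTYPE -o value "/dev/$dev" > /dev/null; then
            nvme=/dev/$dev    # unpartitioned namespace: safe to export
        fi
    done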
00:21:49.540 No valid GPT data, bailing 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:49.540 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:49.541 No valid GPT data, bailing 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:49.541 No valid GPT data, bailing 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:49.541 11:05:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:21:49.541 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:49.799 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -a 10.0.0.1 -t tcp -s 4420 00:21:49.799 00:21:49.799 Discovery Log Number of Records 2, Generation counter 2 00:21:49.799 =====Discovery Log Entry 0====== 00:21:49.799 trtype: tcp 00:21:49.799 adrfam: ipv4 00:21:49.799 subtype: current discovery subsystem 00:21:49.799 treq: not specified, sq flow control disable supported 00:21:49.799 portid: 1 00:21:49.799 trsvcid: 4420 00:21:49.799 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:49.799 traddr: 10.0.0.1 00:21:49.799 eflags: none 00:21:49.799 sectype: none 00:21:49.799 =====Discovery Log Entry 1====== 00:21:49.799 trtype: tcp 00:21:49.799 adrfam: ipv4 00:21:49.799 subtype: nvme subsystem 00:21:49.799 treq: not specified, sq flow control disable supported 00:21:49.799 portid: 1 00:21:49.799 trsvcid: 4420 00:21:49.799 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:49.799 traddr: 10.0.0.1 00:21:49.799 eflags: none 00:21:49.799 sectype: none 00:21:49.799 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:49.799 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:49.799 ===================================================== 00:21:49.799 NVMe over Fabrics controller at 10.0.0.1:4420: 
nqn.2014-08.org.nvmexpress.discovery 00:21:49.799 ===================================================== 00:21:49.799 Controller Capabilities/Features 00:21:49.799 ================================ 00:21:49.799 Vendor ID: 0000 00:21:49.800 Subsystem Vendor ID: 0000 00:21:49.800 Serial Number: a06a14cab1a7db0a4519 00:21:49.800 Model Number: Linux 00:21:49.800 Firmware Version: 6.8.9-20 00:21:49.800 Recommended Arb Burst: 0 00:21:49.800 IEEE OUI Identifier: 00 00 00 00:21:49.800 Multi-path I/O 00:21:49.800 May have multiple subsystem ports: No 00:21:49.800 May have multiple controllers: No 00:21:49.800 Associated with SR-IOV VF: No 00:21:49.800 Max Data Transfer Size: Unlimited 00:21:49.800 Max Number of Namespaces: 0 00:21:49.800 Max Number of I/O Queues: 1024 00:21:49.800 NVMe Specification Version (VS): 1.3 00:21:49.800 NVMe Specification Version (Identify): 1.3 00:21:49.800 Maximum Queue Entries: 1024 00:21:49.800 Contiguous Queues Required: No 00:21:49.800 Arbitration Mechanisms Supported 00:21:49.800 Weighted Round Robin: Not Supported 00:21:49.800 Vendor Specific: Not Supported 00:21:49.800 Reset Timeout: 7500 ms 00:21:49.800 Doorbell Stride: 4 bytes 00:21:49.800 NVM Subsystem Reset: Not Supported 00:21:49.800 Command Sets Supported 00:21:49.800 NVM Command Set: Supported 00:21:49.800 Boot Partition: Not Supported 00:21:49.800 Memory Page Size Minimum: 4096 bytes 00:21:49.800 Memory Page Size Maximum: 4096 bytes 00:21:49.800 Persistent Memory Region: Not Supported 00:21:49.800 Optional Asynchronous Events Supported 00:21:49.800 Namespace Attribute Notices: Not Supported 00:21:49.800 Firmware Activation Notices: Not Supported 00:21:49.800 ANA Change Notices: Not Supported 00:21:49.800 PLE Aggregate Log Change Notices: Not Supported 00:21:49.800 LBA Status Info Alert Notices: Not Supported 00:21:49.800 EGE Aggregate Log Change Notices: Not Supported 00:21:49.800 Normal NVM Subsystem Shutdown event: Not Supported 00:21:49.800 Zone Descriptor Change Notices: Not Supported 00:21:49.800 Discovery Log Change Notices: Supported 00:21:49.800 Controller Attributes 00:21:49.800 128-bit Host Identifier: Not Supported 00:21:49.800 Non-Operational Permissive Mode: Not Supported 00:21:49.800 NVM Sets: Not Supported 00:21:49.800 Read Recovery Levels: Not Supported 00:21:49.800 Endurance Groups: Not Supported 00:21:49.800 Predictable Latency Mode: Not Supported 00:21:49.800 Traffic Based Keep ALive: Not Supported 00:21:49.800 Namespace Granularity: Not Supported 00:21:49.800 SQ Associations: Not Supported 00:21:49.800 UUID List: Not Supported 00:21:49.800 Multi-Domain Subsystem: Not Supported 00:21:49.800 Fixed Capacity Management: Not Supported 00:21:49.800 Variable Capacity Management: Not Supported 00:21:49.800 Delete Endurance Group: Not Supported 00:21:49.800 Delete NVM Set: Not Supported 00:21:49.800 Extended LBA Formats Supported: Not Supported 00:21:49.800 Flexible Data Placement Supported: Not Supported 00:21:49.800 00:21:49.800 Controller Memory Buffer Support 00:21:49.800 ================================ 00:21:49.800 Supported: No 00:21:49.800 00:21:49.800 Persistent Memory Region Support 00:21:49.800 ================================ 00:21:49.800 Supported: No 00:21:49.800 00:21:49.800 Admin Command Set Attributes 00:21:49.800 ============================ 00:21:49.800 Security Send/Receive: Not Supported 00:21:49.800 Format NVM: Not Supported 00:21:49.800 Firmware Activate/Download: Not Supported 00:21:49.800 Namespace Management: Not Supported 00:21:49.800 Device Self-Test: Not Supported 
00:21:49.800 Directives: Not Supported 00:21:49.800 NVMe-MI: Not Supported 00:21:49.800 Virtualization Management: Not Supported 00:21:49.800 Doorbell Buffer Config: Not Supported 00:21:49.800 Get LBA Status Capability: Not Supported 00:21:49.800 Command & Feature Lockdown Capability: Not Supported 00:21:49.800 Abort Command Limit: 1 00:21:49.800 Async Event Request Limit: 1 00:21:49.800 Number of Firmware Slots: N/A 00:21:49.800 Firmware Slot 1 Read-Only: N/A 00:21:49.800 Firmware Activation Without Reset: N/A 00:21:49.800 Multiple Update Detection Support: N/A 00:21:49.800 Firmware Update Granularity: No Information Provided 00:21:49.800 Per-Namespace SMART Log: No 00:21:49.800 Asymmetric Namespace Access Log Page: Not Supported 00:21:49.800 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:49.800 Command Effects Log Page: Not Supported 00:21:49.800 Get Log Page Extended Data: Supported 00:21:49.800 Telemetry Log Pages: Not Supported 00:21:49.800 Persistent Event Log Pages: Not Supported 00:21:49.800 Supported Log Pages Log Page: May Support 00:21:49.800 Commands Supported & Effects Log Page: Not Supported 00:21:49.800 Feature Identifiers & Effects Log Page:May Support 00:21:49.800 NVMe-MI Commands & Effects Log Page: May Support 00:21:49.800 Data Area 4 for Telemetry Log: Not Supported 00:21:49.800 Error Log Page Entries Supported: 1 00:21:49.800 Keep Alive: Not Supported 00:21:49.800 00:21:49.800 NVM Command Set Attributes 00:21:49.800 ========================== 00:21:49.800 Submission Queue Entry Size 00:21:49.800 Max: 1 00:21:49.800 Min: 1 00:21:49.800 Completion Queue Entry Size 00:21:49.800 Max: 1 00:21:49.800 Min: 1 00:21:49.800 Number of Namespaces: 0 00:21:49.800 Compare Command: Not Supported 00:21:49.800 Write Uncorrectable Command: Not Supported 00:21:49.800 Dataset Management Command: Not Supported 00:21:49.800 Write Zeroes Command: Not Supported 00:21:49.800 Set Features Save Field: Not Supported 00:21:49.800 Reservations: Not Supported 00:21:49.800 Timestamp: Not Supported 00:21:49.800 Copy: Not Supported 00:21:49.800 Volatile Write Cache: Not Present 00:21:49.800 Atomic Write Unit (Normal): 1 00:21:49.800 Atomic Write Unit (PFail): 1 00:21:49.800 Atomic Compare & Write Unit: 1 00:21:49.800 Fused Compare & Write: Not Supported 00:21:49.800 Scatter-Gather List 00:21:49.800 SGL Command Set: Supported 00:21:49.800 SGL Keyed: Not Supported 00:21:49.800 SGL Bit Bucket Descriptor: Not Supported 00:21:49.800 SGL Metadata Pointer: Not Supported 00:21:49.800 Oversized SGL: Not Supported 00:21:49.800 SGL Metadata Address: Not Supported 00:21:49.800 SGL Offset: Supported 00:21:49.800 Transport SGL Data Block: Not Supported 00:21:49.800 Replay Protected Memory Block: Not Supported 00:21:49.800 00:21:49.800 Firmware Slot Information 00:21:49.800 ========================= 00:21:49.800 Active slot: 0 00:21:49.800 00:21:49.800 00:21:49.800 Error Log 00:21:49.800 ========= 00:21:49.800 00:21:49.800 Active Namespaces 00:21:49.800 ================= 00:21:49.800 Discovery Log Page 00:21:49.800 ================== 00:21:49.800 Generation Counter: 2 00:21:49.800 Number of Records: 2 00:21:49.800 Record Format: 0 00:21:49.800 00:21:49.800 Discovery Log Entry 0 00:21:49.800 ---------------------- 00:21:49.800 Transport Type: 3 (TCP) 00:21:49.800 Address Family: 1 (IPv4) 00:21:49.800 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:49.800 Entry Flags: 00:21:49.800 Duplicate Returned Information: 0 00:21:49.800 Explicit Persistent Connection Support for Discovery: 0 00:21:49.800 
Transport Requirements: 00:21:49.800 Secure Channel: Not Specified 00:21:49.800 Port ID: 1 (0x0001) 00:21:49.800 Controller ID: 65535 (0xffff) 00:21:49.800 Admin Max SQ Size: 32 00:21:49.800 Transport Service Identifier: 4420 00:21:49.800 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:49.800 Transport Address: 10.0.0.1 00:21:49.800 Discovery Log Entry 1 00:21:49.800 ---------------------- 00:21:49.800 Transport Type: 3 (TCP) 00:21:49.800 Address Family: 1 (IPv4) 00:21:49.800 Subsystem Type: 2 (NVM Subsystem) 00:21:49.800 Entry Flags: 00:21:49.800 Duplicate Returned Information: 0 00:21:49.800 Explicit Persistent Connection Support for Discovery: 0 00:21:49.800 Transport Requirements: 00:21:49.800 Secure Channel: Not Specified 00:21:49.801 Port ID: 1 (0x0001) 00:21:49.801 Controller ID: 65535 (0xffff) 00:21:49.801 Admin Max SQ Size: 32 00:21:49.801 Transport Service Identifier: 4420 00:21:49.801 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:49.801 Transport Address: 10.0.0.1 00:21:49.801 11:05:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:50.061 get_feature(0x01) failed 00:21:50.061 get_feature(0x02) failed 00:21:50.061 get_feature(0x04) failed 00:21:50.061 ===================================================== 00:21:50.061 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:50.061 ===================================================== 00:21:50.061 Controller Capabilities/Features 00:21:50.061 ================================ 00:21:50.061 Vendor ID: 0000 00:21:50.061 Subsystem Vendor ID: 0000 00:21:50.061 Serial Number: 8523013118e635cf320e 00:21:50.061 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:50.061 Firmware Version: 6.8.9-20 00:21:50.061 Recommended Arb Burst: 6 00:21:50.061 IEEE OUI Identifier: 00 00 00 00:21:50.061 Multi-path I/O 00:21:50.061 May have multiple subsystem ports: Yes 00:21:50.061 May have multiple controllers: Yes 00:21:50.061 Associated with SR-IOV VF: No 00:21:50.061 Max Data Transfer Size: Unlimited 00:21:50.061 Max Number of Namespaces: 1024 00:21:50.061 Max Number of I/O Queues: 128 00:21:50.061 NVMe Specification Version (VS): 1.3 00:21:50.061 NVMe Specification Version (Identify): 1.3 00:21:50.061 Maximum Queue Entries: 1024 00:21:50.061 Contiguous Queues Required: No 00:21:50.061 Arbitration Mechanisms Supported 00:21:50.061 Weighted Round Robin: Not Supported 00:21:50.061 Vendor Specific: Not Supported 00:21:50.061 Reset Timeout: 7500 ms 00:21:50.061 Doorbell Stride: 4 bytes 00:21:50.061 NVM Subsystem Reset: Not Supported 00:21:50.061 Command Sets Supported 00:21:50.061 NVM Command Set: Supported 00:21:50.061 Boot Partition: Not Supported 00:21:50.061 Memory Page Size Minimum: 4096 bytes 00:21:50.061 Memory Page Size Maximum: 4096 bytes 00:21:50.061 Persistent Memory Region: Not Supported 00:21:50.061 Optional Asynchronous Events Supported 00:21:50.061 Namespace Attribute Notices: Supported 00:21:50.061 Firmware Activation Notices: Not Supported 00:21:50.061 ANA Change Notices: Supported 00:21:50.061 PLE Aggregate Log Change Notices: Not Supported 00:21:50.061 LBA Status Info Alert Notices: Not Supported 00:21:50.061 EGE Aggregate Log Change Notices: Not Supported 00:21:50.061 Normal NVM Subsystem Shutdown event: Not Supported 00:21:50.061 Zone Descriptor Change Notices: Not Supported 
00:21:50.061 Discovery Log Change Notices: Not Supported 00:21:50.061 Controller Attributes 00:21:50.061 128-bit Host Identifier: Supported 00:21:50.061 Non-Operational Permissive Mode: Not Supported 00:21:50.061 NVM Sets: Not Supported 00:21:50.061 Read Recovery Levels: Not Supported 00:21:50.061 Endurance Groups: Not Supported 00:21:50.061 Predictable Latency Mode: Not Supported 00:21:50.061 Traffic Based Keep ALive: Supported 00:21:50.061 Namespace Granularity: Not Supported 00:21:50.061 SQ Associations: Not Supported 00:21:50.061 UUID List: Not Supported 00:21:50.061 Multi-Domain Subsystem: Not Supported 00:21:50.061 Fixed Capacity Management: Not Supported 00:21:50.061 Variable Capacity Management: Not Supported 00:21:50.061 Delete Endurance Group: Not Supported 00:21:50.061 Delete NVM Set: Not Supported 00:21:50.061 Extended LBA Formats Supported: Not Supported 00:21:50.061 Flexible Data Placement Supported: Not Supported 00:21:50.061 00:21:50.061 Controller Memory Buffer Support 00:21:50.061 ================================ 00:21:50.061 Supported: No 00:21:50.061 00:21:50.061 Persistent Memory Region Support 00:21:50.061 ================================ 00:21:50.061 Supported: No 00:21:50.061 00:21:50.061 Admin Command Set Attributes 00:21:50.061 ============================ 00:21:50.061 Security Send/Receive: Not Supported 00:21:50.061 Format NVM: Not Supported 00:21:50.061 Firmware Activate/Download: Not Supported 00:21:50.061 Namespace Management: Not Supported 00:21:50.061 Device Self-Test: Not Supported 00:21:50.061 Directives: Not Supported 00:21:50.061 NVMe-MI: Not Supported 00:21:50.061 Virtualization Management: Not Supported 00:21:50.061 Doorbell Buffer Config: Not Supported 00:21:50.061 Get LBA Status Capability: Not Supported 00:21:50.061 Command & Feature Lockdown Capability: Not Supported 00:21:50.061 Abort Command Limit: 4 00:21:50.061 Async Event Request Limit: 4 00:21:50.061 Number of Firmware Slots: N/A 00:21:50.061 Firmware Slot 1 Read-Only: N/A 00:21:50.061 Firmware Activation Without Reset: N/A 00:21:50.061 Multiple Update Detection Support: N/A 00:21:50.061 Firmware Update Granularity: No Information Provided 00:21:50.061 Per-Namespace SMART Log: Yes 00:21:50.061 Asymmetric Namespace Access Log Page: Supported 00:21:50.061 ANA Transition Time : 10 sec 00:21:50.061 00:21:50.061 Asymmetric Namespace Access Capabilities 00:21:50.061 ANA Optimized State : Supported 00:21:50.061 ANA Non-Optimized State : Supported 00:21:50.061 ANA Inaccessible State : Supported 00:21:50.061 ANA Persistent Loss State : Supported 00:21:50.061 ANA Change State : Supported 00:21:50.061 ANAGRPID is not changed : No 00:21:50.061 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:50.061 00:21:50.061 ANA Group Identifier Maximum : 128 00:21:50.061 Number of ANA Group Identifiers : 128 00:21:50.061 Max Number of Allowed Namespaces : 1024 00:21:50.061 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:50.061 Command Effects Log Page: Supported 00:21:50.061 Get Log Page Extended Data: Supported 00:21:50.061 Telemetry Log Pages: Not Supported 00:21:50.061 Persistent Event Log Pages: Not Supported 00:21:50.061 Supported Log Pages Log Page: May Support 00:21:50.061 Commands Supported & Effects Log Page: Not Supported 00:21:50.061 Feature Identifiers & Effects Log Page:May Support 00:21:50.061 NVMe-MI Commands & Effects Log Page: May Support 00:21:50.061 Data Area 4 for Telemetry Log: Not Supported 00:21:50.061 Error Log Page Entries Supported: 128 00:21:50.061 Keep Alive: Supported 
00:21:50.061 Keep Alive Granularity: 1000 ms 00:21:50.061 00:21:50.061 NVM Command Set Attributes 00:21:50.061 ========================== 00:21:50.061 Submission Queue Entry Size 00:21:50.061 Max: 64 00:21:50.061 Min: 64 00:21:50.061 Completion Queue Entry Size 00:21:50.061 Max: 16 00:21:50.061 Min: 16 00:21:50.061 Number of Namespaces: 1024 00:21:50.061 Compare Command: Not Supported 00:21:50.061 Write Uncorrectable Command: Not Supported 00:21:50.061 Dataset Management Command: Supported 00:21:50.061 Write Zeroes Command: Supported 00:21:50.061 Set Features Save Field: Not Supported 00:21:50.061 Reservations: Not Supported 00:21:50.061 Timestamp: Not Supported 00:21:50.061 Copy: Not Supported 00:21:50.061 Volatile Write Cache: Present 00:21:50.061 Atomic Write Unit (Normal): 1 00:21:50.061 Atomic Write Unit (PFail): 1 00:21:50.061 Atomic Compare & Write Unit: 1 00:21:50.061 Fused Compare & Write: Not Supported 00:21:50.061 Scatter-Gather List 00:21:50.061 SGL Command Set: Supported 00:21:50.061 SGL Keyed: Not Supported 00:21:50.061 SGL Bit Bucket Descriptor: Not Supported 00:21:50.061 SGL Metadata Pointer: Not Supported 00:21:50.061 Oversized SGL: Not Supported 00:21:50.061 SGL Metadata Address: Not Supported 00:21:50.061 SGL Offset: Supported 00:21:50.061 Transport SGL Data Block: Not Supported 00:21:50.061 Replay Protected Memory Block: Not Supported 00:21:50.061 00:21:50.061 Firmware Slot Information 00:21:50.061 ========================= 00:21:50.061 Active slot: 0 00:21:50.061 00:21:50.061 Asymmetric Namespace Access 00:21:50.061 =========================== 00:21:50.061 Change Count : 0 00:21:50.061 Number of ANA Group Descriptors : 1 00:21:50.061 ANA Group Descriptor : 0 00:21:50.061 ANA Group ID : 1 00:21:50.062 Number of NSID Values : 1 00:21:50.062 Change Count : 0 00:21:50.062 ANA State : 1 00:21:50.062 Namespace Identifier : 1 00:21:50.062 00:21:50.062 Commands Supported and Effects 00:21:50.062 ============================== 00:21:50.062 Admin Commands 00:21:50.062 -------------- 00:21:50.062 Get Log Page (02h): Supported 00:21:50.062 Identify (06h): Supported 00:21:50.062 Abort (08h): Supported 00:21:50.062 Set Features (09h): Supported 00:21:50.062 Get Features (0Ah): Supported 00:21:50.062 Asynchronous Event Request (0Ch): Supported 00:21:50.062 Keep Alive (18h): Supported 00:21:50.062 I/O Commands 00:21:50.062 ------------ 00:21:50.062 Flush (00h): Supported 00:21:50.062 Write (01h): Supported LBA-Change 00:21:50.062 Read (02h): Supported 00:21:50.062 Write Zeroes (08h): Supported LBA-Change 00:21:50.062 Dataset Management (09h): Supported 00:21:50.062 00:21:50.062 Error Log 00:21:50.062 ========= 00:21:50.062 Entry: 0 00:21:50.062 Error Count: 0x3 00:21:50.062 Submission Queue Id: 0x0 00:21:50.062 Command Id: 0x5 00:21:50.062 Phase Bit: 0 00:21:50.062 Status Code: 0x2 00:21:50.062 Status Code Type: 0x0 00:21:50.062 Do Not Retry: 1 00:21:50.062 Error Location: 0x28 00:21:50.062 LBA: 0x0 00:21:50.062 Namespace: 0x0 00:21:50.062 Vendor Log Page: 0x0 00:21:50.062 ----------- 00:21:50.062 Entry: 1 00:21:50.062 Error Count: 0x2 00:21:50.062 Submission Queue Id: 0x0 00:21:50.062 Command Id: 0x5 00:21:50.062 Phase Bit: 0 00:21:50.062 Status Code: 0x2 00:21:50.062 Status Code Type: 0x0 00:21:50.062 Do Not Retry: 1 00:21:50.062 Error Location: 0x28 00:21:50.062 LBA: 0x0 00:21:50.062 Namespace: 0x0 00:21:50.062 Vendor Log Page: 0x0 00:21:50.062 ----------- 00:21:50.062 Entry: 2 00:21:50.062 Error Count: 0x1 00:21:50.062 Submission Queue Id: 0x0 00:21:50.062 Command Id: 0x4 
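The error log shown here (entry 2 continues just below) holds exactly three entries, matching the three get_feature() calls that failed earlier in the run: status code type 0x0 with status code 0x2 (Invalid Field in Command), Do Not Retry set, error location 0x28. A hedged sketch of fetching the same entries once connected (assumes nvme-cli and /dev/nvme0):

    # Read the three most recent error-log entries the target recorded
    nvme error-log /dev/nvme0 --log-entries=3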
00:21:50.062 Phase Bit: 0 00:21:50.062 Status Code: 0x2 00:21:50.062 Status Code Type: 0x0 00:21:50.062 Do Not Retry: 1 00:21:50.062 Error Location: 0x28 00:21:50.062 LBA: 0x0 00:21:50.062 Namespace: 0x0 00:21:50.062 Vendor Log Page: 0x0 00:21:50.062 00:21:50.062 Number of Queues 00:21:50.062 ================ 00:21:50.062 Number of I/O Submission Queues: 128 00:21:50.062 Number of I/O Completion Queues: 128 00:21:50.062 00:21:50.062 ZNS Specific Controller Data 00:21:50.062 ============================ 00:21:50.062 Zone Append Size Limit: 0 00:21:50.062 00:21:50.062 00:21:50.062 Active Namespaces 00:21:50.062 ================= 00:21:50.062 get_feature(0x05) failed 00:21:50.062 Namespace ID:1 00:21:50.062 Command Set Identifier: NVM (00h) 00:21:50.062 Deallocate: Supported 00:21:50.062 Deallocated/Unwritten Error: Not Supported 00:21:50.062 Deallocated Read Value: Unknown 00:21:50.062 Deallocate in Write Zeroes: Not Supported 00:21:50.062 Deallocated Guard Field: 0xFFFF 00:21:50.062 Flush: Supported 00:21:50.062 Reservation: Not Supported 00:21:50.062 Namespace Sharing Capabilities: Multiple Controllers 00:21:50.062 Size (in LBAs): 1310720 (5GiB) 00:21:50.062 Capacity (in LBAs): 1310720 (5GiB) 00:21:50.062 Utilization (in LBAs): 1310720 (5GiB) 00:21:50.062 UUID: 7fae453f-6622-41b1-8cbe-cdc257d91a21 00:21:50.062 Thin Provisioning: Not Supported 00:21:50.062 Per-NS Atomic Units: Yes 00:21:50.062 Atomic Boundary Size (Normal): 0 00:21:50.062 Atomic Boundary Size (PFail): 0 00:21:50.062 Atomic Boundary Offset: 0 00:21:50.062 NGUID/EUI64 Never Reused: No 00:21:50.062 ANA group ID: 1 00:21:50.062 Namespace Write Protected: No 00:21:50.062 Number of LBA Formats: 1 00:21:50.062 Current LBA Format: LBA Format #00 00:21:50.062 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:50.062 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:50.062 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:50.062 rmmod nvme_tcp 00:21:50.062 rmmod nvme_fabrics 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:50.322 11:05:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:21:50.322 11:05:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:21:50.322 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:21:50.584 11:05:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:51.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:51.521 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:51.521 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:51.521 00:21:51.521 real 0m4.089s 00:21:51.521 user 0m1.495s 00:21:51.521 sys 0m2.155s 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.521 ************************************ 00:21:51.521 END TEST nvmf_identify_kernel_target 00:21:51.521 ************************************ 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:51.521 
11:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.521 ************************************ 00:21:51.521 START TEST nvmf_auth_host 00:21:51.521 ************************************ 00:21:51.521 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:51.781 * Looking for test storage... 00:21:51.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.781 --rc genhtml_branch_coverage=1 00:21:51.781 --rc genhtml_function_coverage=1 00:21:51.781 --rc genhtml_legend=1 00:21:51.781 --rc geninfo_all_blocks=1 00:21:51.781 --rc geninfo_unexecuted_blocks=1 00:21:51.781 00:21:51.781 ' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.781 --rc genhtml_branch_coverage=1 00:21:51.781 --rc genhtml_function_coverage=1 00:21:51.781 --rc genhtml_legend=1 00:21:51.781 --rc geninfo_all_blocks=1 00:21:51.781 --rc geninfo_unexecuted_blocks=1 00:21:51.781 00:21:51.781 ' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.781 --rc genhtml_branch_coverage=1 00:21:51.781 --rc genhtml_function_coverage=1 00:21:51.781 --rc genhtml_legend=1 00:21:51.781 --rc geninfo_all_blocks=1 00:21:51.781 --rc geninfo_unexecuted_blocks=1 00:21:51.781 00:21:51.781 ' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:51.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.781 --rc genhtml_branch_coverage=1 00:21:51.781 --rc genhtml_function_coverage=1 00:21:51.781 --rc genhtml_legend=1 00:21:51.781 --rc geninfo_all_blocks=1 00:21:51.781 --rc geninfo_unexecuted_blocks=1 00:21:51.781 00:21:51.781 ' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.781 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:51.782 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # 
nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:51.782 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@223 -- # create_target_ns 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:52.042 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@105 -- # delete_main_bridge 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:52.043 11:05:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local 
dev=initiator0 in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 
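The trace above shows set_ip resolving the pool value 167772161 to 10.0.0.1: 167772161 is 0x0A000001, and val_to_ip splits it into octets before printf joins them with dots. A pure-bash sketch of that conversion (the real helper lives in test/nvmf/setup.sh; this reimplementation is illustrative only):

    # Convert a 32-bit integer from the ip_pool into dotted-quad notation
    val_to_ip() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >> 8)  & 255 )) \
            $((  val        & 255 ))
    }
    val_to_ip 167772161   # -> 10.0.0.1
    val_to_ip 167772162   # -> 10.0.0.2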
00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:52.043 10.0.0.1 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:52.043 10.0.0.2 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.043 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:52.044 11:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:52.044 11:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target1 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:52.044 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 
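At this point the helpers have built the second interface pair; the add_to_ns call that opens here completes just below by moving target1 into the namespace. Condensed, the commands this pair-1 setup expands to look roughly like the following (device names and addresses taken from the trace; run as root):

    # veth pairs: the plain ends carry traffic, the *_br ends join the bridge
    ip link add initiator1 type veth peer name initiator1_br
    ip link add target1 type veth peer name target1_br
    ip link set initiator1 up
    ip link set initiator1_br up
    ip link set target1_br up
    # target end lives in the nvmf_ns_spdk namespace created earlier
    ip link set target1 netns nvmf_ns_spdk
    ip addr add 10.0.0.3/24 dev initiator1
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
    ip netns exec nvmf_ns_spdk ip link set target1 up
    # enslave the *_br peers so initiator and namespaced target can talk
    ip link set initiator1_br master nvmf_br
    ip link set target1_br master nvmf_br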
00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772163 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:52.306 10.0.0.3 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772164 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:52.306 10.0.0.4 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:52.306 11:05:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:52.306 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:52.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
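ping_ips then verifies the plumbing in both directions; the statistics for this first probe continue just below. Stripped of the tracing, the reachability check amounts to (names from the trace; run as root):

    # From inside the target namespace, probe the initiator-side address...
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
    # ...and from the host, probe the namespaced target address
    ping -c 1 10.0.0.2
    # the assigned IPs are mirrored into ifalias, which is how get_ip_address
    # recovers them without parsing `ip addr` output
    cat /sys/class/net/initiator0/ifalias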
00:21:52.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:21:52.307 00:21:52.307 --- 10.0.0.1 ping statistics --- 00:21:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.307 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:52.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:21:52.307 00:21:52.307 --- 10.0.0.2 ping statistics --- 00:21:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.307 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:52.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:52.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:21:52.307 00:21:52.307 --- 10.0.0.3 ping statistics --- 00:21:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.307 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:52.307 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:52.307 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:52.307 00:21:52.307 --- 10.0.0.4 ping statistics --- 00:21:52.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.307 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # return 0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:52.307 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:52.308 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:52.567 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:52.567 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:52.567 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:52.567 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.567 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:52.568 
11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:52.568 
11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=78253 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:52.568 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 78253 00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78253 ']' 00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
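
At this point the namespaced network is fully wired and verified, nvme-tcp is loaded, and the trace launches the SPDK target inside the nvmf_ns_spdk namespace with DH-HMAC-CHAP tracing enabled (-L nvme_auth). Condensed, the nvmfappstart/waitforlisten pattern being traced looks roughly like the sketch below; this is a simplified reconstruction of the helpers in test/nvmf/common.sh and test/common/autotest_common.sh, not the verbatim source.

    # Sketch of the launch/wait pattern (simplified; paths as logged).
    nvmfappstart() {
        timing_enter start_nvmf_tgt
        # NVMF_APP was prefixed with NVMF_TARGET_NS_CMD (setup.sh@227), so the
        # target runs inside the namespace and listens on target0/target1.
        ip netns exec nvmf_ns_spdk \
            /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF "$@" &
        nvmfpid=$!
        # waitforlisten polls the UNIX-domain RPC socket (/var/tmp/spdk.sock)
        # for up to 100 retries before the test is allowed to issue RPCs.
        waitforlisten "$nvmfpid"
        timing_exit start_nvmf_tgt
        trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    }
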
00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.569 11:05:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=693326b748a44689160df694788f77b4 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Elg 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 693326b748a44689160df694788f77b4 0 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 693326b748a44689160df694788f77b4 0 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=693326b748a44689160df694788f77b4 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Elg 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Elg 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Elg 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:53.508 11:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:53.508 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=307310b8b0d2df342fac5d774158fba75b659caad6e32ea7656ca90e4f5fb916 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.IHU 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 307310b8b0d2df342fac5d774158fba75b659caad6e32ea7656ca90e4f5fb916 3 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 307310b8b0d2df342fac5d774158fba75b659caad6e32ea7656ca90e4f5fb916 3 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=307310b8b0d2df342fac5d774158fba75b659caad6e32ea7656ca90e4f5fb916 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.IHU 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.IHU 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.IHU 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=0e586c1ab139a87a0e117606cee7ec652d433bbcfbf9d85b 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.NZd 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 0e586c1ab139a87a0e117606cee7ec652d433bbcfbf9d85b 0 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 0e586c1ab139a87a0e117606cee7ec652d433bbcfbf9d85b 0 
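
Here and in the repetitions that follow, the test pre-generates its DH-HMAC-CHAP secrets: transport keys keys[0]..keys[4] and controller keys ckeys[0]..ckeys[3], each built from len/2 bytes of /dev/urandom and wrapped in the DHHC-1 secret format. A minimal sketch of the gen_dhchap_key/format_key pair being traced: the digest map, xxd call, mktemp naming, and 0600 permissions all match the trace, while the one-line python encoder is an assumption based on the standard DHHC-1 layout, base64(secret || CRC32-LE(secret)), with which the logged keys are consistent.

    # Minimal sketch of gen_dhchap_key; the in-tree helper in nvmf/common.sh
    # performs the same transformation through an inline python snippet.
    gen_dhchap_key() {
        local digest=$1 len=$2 file key
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<digest id as 2 hex digits>:<base64 of secret + CRC32>:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(k+crc).decode()}:", end="")' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Called as, e.g., keys[0]=$(gen_dhchap_key null 32) or ckeys[0]=$(gen_dhchap_key sha512 64), exactly as host/auth.sh@73 does above; host/auth.sh later registers each resulting file with keyring_file_add_key so the bdev layer can reference them by the names key0/ckey0 through key4.
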
00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=0e586c1ab139a87a0e117606cee7ec652d433bbcfbf9d85b 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.NZd 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.NZd 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.NZd 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=a0d4f29139fc6dd67e7bff249b55b8d92d69fb8d3cc1f21d 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.pUb 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key a0d4f29139fc6dd67e7bff249b55b8d92d69fb8d3cc1f21d 2 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 a0d4f29139fc6dd67e7bff249b55b8d92d69fb8d3cc1f21d 2 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=a0d4f29139fc6dd67e7bff249b55b8d92d69fb8d3cc1f21d 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.pUb 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.pUb 00:21:53.768 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pUb 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:53.769 11:05:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5c2c4135b6138e5d087dfdb6e6438d22 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.tjx 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5c2c4135b6138e5d087dfdb6e6438d22 1 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5c2c4135b6138e5d087dfdb6e6438d22 1 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5c2c4135b6138e5d087dfdb6e6438d22 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:21:53.769 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:54.028 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.tjx 00:21:54.028 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.tjx 00:21:54.028 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.tjx 00:21:54.028 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=06331bc042e22a9e3b1828b33d3eea56 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.I7U 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 06331bc042e22a9e3b1828b33d3eea56 1 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 06331bc042e22a9e3b1828b33d3eea56 1 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:54.029 11:05:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # 
key=06331bc042e22a9e3b1828b33d3eea56 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.I7U 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.I7U 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.I7U 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=304e1bc4901e2adefc32b5882517aed6d858152bd5fd5539 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.x3a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 304e1bc4901e2adefc32b5882517aed6d858152bd5fd5539 2 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 304e1bc4901e2adefc32b5882517aed6d858152bd5fd5539 2 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=304e1bc4901e2adefc32b5882517aed6d858152bd5fd5539 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.x3a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.x3a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.x3a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:21:54.029 11:05:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=bfa5b2f9ffaed69592f00a7c088ebf8a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.N5m 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key bfa5b2f9ffaed69592f00a7c088ebf8a 0 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 bfa5b2f9ffaed69592f00a7c088ebf8a 0 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=bfa5b2f9ffaed69592f00a7c088ebf8a 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:21:54.029 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.N5m 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.N5m 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.N5m 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=6104e7eac326cada9fcd9f8cd59dddece5ecdca152304f8d603a927385cc0fa0 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Ogh 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 6104e7eac326cada9fcd9f8cd59dddece5ecdca152304f8d603a927385cc0fa0 3 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 6104e7eac326cada9fcd9f8cd59dddece5ecdca152304f8d603a927385cc0fa0 3 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=6104e7eac326cada9fcd9f8cd59dddece5ecdca152304f8d603a927385cc0fa0 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@507 -- # python - 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Ogh 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Ogh 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Ogh 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78253 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78253 ']' 00:21:54.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.289 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Elg 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.IHU ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.IHU 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.NZd 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pUb ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.pUb 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tjx 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.I7U ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I7U 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.x3a 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.549 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.N5m ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.N5m 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Ogh 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:54.550 
11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:54.550 11:05:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:55.148 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:55.148 Waiting for block devices as requested 00:21:55.148 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:55.454 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:56.393 No valid GPT data, bailing 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:56.393 No valid GPT data, bailing 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:56.393 No valid GPT data, bailing 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:56.393 No valid GPT data, bailing 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:21:56.393 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:56.394 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -a 10.0.0.1 -t tcp -s 4420 00:21:56.653 00:21:56.653 Discovery Log Number of Records 2, Generation counter 2 00:21:56.653 =====Discovery Log Entry 0====== 00:21:56.653 trtype: tcp 00:21:56.653 adrfam: ipv4 00:21:56.653 subtype: current discovery subsystem 00:21:56.653 treq: not specified, sq flow control disable supported 00:21:56.653 portid: 1 00:21:56.653 trsvcid: 4420 00:21:56.653 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:56.653 traddr: 10.0.0.1 00:21:56.653 eflags: none 00:21:56.653 sectype: none 00:21:56.653 =====Discovery Log Entry 1====== 00:21:56.653 trtype: tcp 00:21:56.653 adrfam: ipv4 00:21:56.653 subtype: nvme subsystem 00:21:56.653 treq: not specified, sq flow control disable supported 00:21:56.653 portid: 1 00:21:56.653 trsvcid: 4420 00:21:56.653 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:56.653 traddr: 10.0.0.1 00:21:56.653 eflags: none 00:21:56.653 sectype: none 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:56.653 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:56.654 
11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.654 nvme0n1 00:21:56.654 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 nvme0n1 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
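Condensed, the target bring-up that the trace above walks through one echo at a time amounts to a short configfs sequence. The sketch below is a reconstruction, not a quote of nvmf/common.sh or host/auth.sh: the NQNs, device, address and port are the ones visible in the log, but an xtrace does not show redirect targets, so the nvmet attribute names here are the standard Linux nvmet ones and are a best guess.

# Export /dev/nvme1n1 through the kernel nvmet/TCP soft target and admit
# only host0 (condenses the mkdir/echo/ln -s steps seen in the trace).
cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
port=$cfg/ports/1
host=$cfg/hosts/nqn.2024-02.io.spdk:host0

mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string echoed in the log (attr name inferred)
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # backing device selected by the GPT scan above
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                           # initiator-visible address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port
echo 0 > "$subsys/attr_allow_any_host"                        # auth test restricts access...
ln -s "$host" "$subsys/allowed_hosts/"                        # ...to host0 only

After this, the `nvme discover` output shown above should list the discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420.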
00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.913 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.173 nvme0n1 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.173 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.174 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 nvme0n1 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 nvme0n1 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.435 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:57.436 11:05:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.436 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.696 nvme0n1 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.696 
11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:57.696 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
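Every nvmet_auth_set_key/connect_authenticate pair in this log follows the same shape. Stripped of the xtrace noise, one iteration is roughly the following, where rpc_cmd is SPDK's test wrapper around scripts/rpc.py; the rpc_cmd invocations and jq filter are taken verbatim from the trace, while the dhchap_* configfs attribute names are the standard nvmet host-entry ones, inferred since redirects are not traced. The DHHC-1 secrets are shortened here; the full values appear in the log.

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Target side: program digest, DH group and key pair for the host entry.
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe3072      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:Njkz...aKpR7:' > "$host/dhchap_key"       # host key 0 (shortened)
echo 'DHHC-1:03:MzA3...x6x4=:' > "$host/dhchap_ctrl_key"  # controller key 0 (shortened)

# Initiator side: allow the same digest/dhgroup, attach with key 0, verify, detach.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0                 # clean up for the next combination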
00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:57.955 11:05:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:57.955 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:57.955 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:57.956 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:57.956 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.956 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.956 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.214 nvme0n1 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
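The host/auth.sh@100, @101 and @102 markers repeating through this section show how the combinations are driven: three nested loops over digests, DH groups and key IDs, re-keying the target and re-attaching for every combination. A paraphrase of that structure, with the loop headers and helper names taken from the trace and the array contents inferred from the printf lines logged earlier:

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
# keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets provisioned at startup;
# an empty ckey (as for keyid 4) means no bidirectional controller key.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side configfs writes
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # rpc attach / verify / detach
        done
    done
done

connect_authenticate resolves the initiator address it passes to -a by reading /sys/class/net/initiator0/ifalias, which is why the trace repeats the get_ip_address block and echoes 10.0.0.1 before every attach.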
00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.214 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:58.215 11:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.215 nvme0n1 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.215 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.474 11:05:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.474 nvme0n1
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:58.474 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.475 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.733 nvme0n1
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:58.733 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.734 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.993 nvme0n1
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7:
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=:
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:58.993 11:05:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7:
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=:
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.561 nvme0n1
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.561 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==:
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==:
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==:
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]]
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==:
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.820 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.821 nvme0n1
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:59.821 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.079 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.080 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:00.080 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.080 11:05:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8:
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km:
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8:
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km:
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.080 nvme0n1
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.080 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:00.338 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.339 nvme0n1
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.339 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.598 nvme0n1
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.598 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:00.599 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.599 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.599 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7:
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=:
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:00.858 11:05:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7:
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=:
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.236 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:02.495 nvme0n1
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:02.495 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==:
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==:
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==:
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==:
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.496 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.065 nvme0n1
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.065 11:05:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8:
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km:
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8:
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km:
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.065 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.325 nvme0n1
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==:
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.325 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.894 nvme0n1
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=:
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.894 11:05:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:04.153 nvme0n1
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- #
keyid=0 00:22:04.153 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.154 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.723 nvme0n1 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.723 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.724 11:05:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.289 nvme0n1 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.289 11:05:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.289 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.548 11:05:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.548 11:05:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.117 nvme0n1 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.117 11:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:06.117 11:05:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.117 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.683 nvme0n1 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:06.683 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.684 11:05:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.250 
nvme0n1 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:07.250 11:05:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.250 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 nvme0n1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 nvme0n1 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.510 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 
00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 nvme0n1 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:07.769 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 
00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:07.770 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:05:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 nvme0n1 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.028 11:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.028 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.029 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 nvme0n1 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.287 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:08.288 11:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo initiator0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.288 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 nvme0n1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.549 11:05:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.549 nvme0n1 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.549 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.550 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:08.808 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.809 nvme0n1 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
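
Worth noting in the trace above: host/auth.sh@58 builds the controller-key argument as an array via bash's ${var:+word} expansion, so --dhchap-ctrlr-key is passed only when a controller key exists for that keyid. A minimal standalone sketch of the idiom, with hypothetical key values (keyid 4 carries no controller key in this run, as the empty ckey= lines show):

    declare -a ckeys=([0]="DHHC-1:03:..." [4]="")          # [4] is set but empty
    for keyid in 0 4; do
        # non-empty entry -> two words; empty entry -> empty array
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    # keyid=4 -> 0 extra arg(s):

This is why the keyid=4 attach commands in this log end at --dhchap-key key4 with no controller key: the array expands to nothing and unidirectional authentication is exercised.
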
00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' 
]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:08.809 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.068 11:05:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.068 nvme0n1 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.068 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.327 nvme0n1 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.327 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.620 nvme0n1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.620 11:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.620 11:05:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.620 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.890 nvme0n1 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:09.890 
11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # 
dev=initiator0 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.890 11:05:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.150 nvme0n1 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:10.150 11:05:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.150 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.410 nvme0n1 
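
Each pass ends with the same verification tail: bdev_nvme_get_controllers piped through jq must yield nvme0, compared with every character backslash-escaped ([[ nvme0 == \n\v\m\e\0 ]]) so the right-hand side is matched literally rather than as a glob pattern, while the bracketing xtrace_disable / [[ 0 == 0 ]] lines are evidently the rpc_cmd wrapper muting the trace and asserting a zero RPC exit status. Condensed, using the same commands as the trace:

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]]     # literal match; escapes disable pattern matching
    rpc_cmd bdev_nvme_detach_controller nvme0
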
00:22:10.410 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.410 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.410 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.410 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.411 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.671 nvme0n1 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in 
"${dhgroups[@]}" 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.671 11:05:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.931 nvme0n1 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.932 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.191 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.192 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.451 nvme0n1 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.451 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.710 nvme0n1 00:22:11.710 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.710 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:11.710 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.710 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.710 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:11.710 11:05:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:11.968 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
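[note] The ckey=(${ckeys[keyid]:+...}) expansion at host/auth.sh@58 is what lets one attach command cover both unidirectional and bidirectional authentication: the array stays empty when no controller key is configured for the index (as for keyid 4 above, where ckey is blank), so "${ckey[@]}" contributes nothing to the command line. A small demonstration of the idiom, reusing the ckey3 value from this run:

    ckeys[3]='DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:'
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"      # --dhchap-ctrlr-key ckey3
    keyid=4                # ckeys[4] is empty, so the expansion yields nothing
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"     # 0 -> attach runs without a controller key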
00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.969 11:05:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.228 nvme0n1 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha384 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:12.228 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
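[note] nvmet_auth_set_key (auth.sh@103) programs the target side before each attach. set -x does not print redirections, so the bare echo lines at auth.sh@48-@51 only show the values being written; presumably they land in the kernel nvmet configfs attributes for the allowed host. A sketch under that assumption (path and attribute names are not visible in the trace; keys/ckeys are the script's key arrays):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
        echo "hmac($digest)"  > "$host/dhchap_hash"
        echo "$dhgroup"       > "$host/dhchap_dhgroup"
        echo "${keys[keyid]}" > "$host/dhchap_key"
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }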
00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.229 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.487 nvme0n1 00:22:12.487 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.746 11:05:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.314 nvme0n1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.314 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 nvme0n1 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.883 11:05:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
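[note] All secrets in this trace use the DHHC-1 representation from NVMe TP 8006: DHHC-1:<t>:<base64 of the secret plus a CRC-32>:, where t=00 is a raw secret and 01/02/03 mark SHA-256/384/512-transformed ones; the three base64 lengths seen here correspond to 32-, 48- and 64-byte secrets. nvme-cli can mint compatible secrets (flag names as in recent nvme-cli releases, an assumption worth checking against the installed version):

    # Generate a 48-byte, untransformed (t=00) DH-HMAC-CHAP secret
    nvme gen-dhchap-key --key-length=48 --hmac=0 --nqn nqn.2024-02.io.spdk:host0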
00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.883 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.450 nvme0n1 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.450 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:14.709 
11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.709 11:05:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.305 nvme0n1 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.305 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.306 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.878 nvme0n1 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.878 11:05:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.136 nvme0n1 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:16.136 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.137 nvme0n1 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.137 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.407 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.407 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:16.408 
11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.408 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.409 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.410 nvme0n1 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:16.410 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.411 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 11:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 nvme0n1 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.677 11:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.677 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.935 nvme0n1 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:16.936 11:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.936 11:05:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.936 11:05:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.936 nvme0n1 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.936 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 nvme0n1 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
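Each authentication pass in this trace repeats the same RPC cycle: constrain the initiator's DH-HMAC-CHAP parameters, attach, check that the controller came up, detach. Condensed into a standalone sketch (a reconstruction, not part of the trace: it assumes SPDK's scripts/rpc.py is on PATH, the target is listening at 10.0.0.1:4420 as resolved above, and the DHHC-1 secrets were already registered under the key names key<N>/ckey<N> -- that registration step happens outside this excerpt):

  #!/usr/bin/env bash
  # One connect/verify/detach iteration, mirroring host/auth.sh@57-65 above.
  digest=sha512 dhgroup=ffdhe3072 keyid=1
  ip=10.0.0.1    # the trace derives this from /sys/class/net/initiator0/ifalias

  # Restrict negotiation to the digest/dhgroup pair under test (host/auth.sh@60)
  rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with this keyid's pair (host/auth.sh@61); --dhchap-ctrlr-key is only
  # passed when ckeys[keyid] is non-empty -- the keyid=4 passes above omit it
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Authentication succeeded iff the controller is visible by name (host/auth.sh@64)
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

  # Tear down before the next (digest, dhgroup, keyid) combination (host/auth.sh@65)
  rpc.py bdev_nvme_detach_controller nvme0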
00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:17.197 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:17.456 11:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 nvme0n1 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:17.456 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.457 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.716 nvme0n1 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.716 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.717 11:05:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.717 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 nvme0n1 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
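
Each iteration in this trace is the same attach/verify/detach round trip against the in-kernel target. A minimal sketch of one pass, assuming scripts/rpc.py from the SPDK tree stands in for the rpc_cmd test wrapper and that the subsystem and DH-HMAC-CHAP keys (key3/ckey3) were registered earlier in the run; every RPC name and flag below appears verbatim in the trace:

    rpc_py=scripts/rpc.py   # assumed path to the SPDK RPC client behind rpc_cmd
    "$rpc_py" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # the pass criterion traced at host/auth.sh@64: the one controller is named nvme0
    [[ $("$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc_py" bdev_nvme_detach_controller nvme0
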
00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.977 11:05:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.977 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 nvme0n1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:18.242 11:05:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.242 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.503 nvme0n1 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.503 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:18.504 
11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.504 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
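
The ckey=(...) expansion traced at host/auth.sh@58 is what makes bidirectional authentication optional per key slot: when a slot's controller key is empty (as for keyid 4 above, where ckey=''), the ${var:+...} form expands to zero words and the --dhchap-ctrlr-key flag is dropped entirely. A self-contained illustration of the idiom, with the slot-3 value copied from this trace and slot 4 left empty as in the run:

    ckeys=([3]="DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj:" [4]="")
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # empty value -> empty array
        echo "keyid=$keyid extra args: ${ckey[*]:-(none)}"
    done
    # keyid=3 extra args: --dhchap-ctrlr-key ckey3
    # keyid=4 extra args: (none)
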
00:22:18.762 nvme0n1 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:18.762 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.763 11:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.023 nvme0n1 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 
-- # [[ -n initiator0 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.023 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.282 nvme0n1 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:19.282 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.283 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.542 nvme0n1 00:22:19.542 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.542 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.542 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:19.542 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.542 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:19.802 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
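
The long get_main_ns_ip -> get_ip_address -> get_net_dev chain repeated before every attach resolves to a single read: the test network setup stores each interface's address in its kernel ifalias, so the helpers just cat /sys/class/net/<dev>/ifalias. A condensed reconstruction under that assumption, with names taken from the trace and error handling trimmed:

    get_ip_address() {                        # condensed form of nvmf/setup.sh@156-166
        local dev=$1 ip
        ip=$(<"/sys/class/net/$dev/ifalias")  # address stashed here during net setup
        [[ -n $ip ]] && echo "$ip"
    }
    get_ip_address initiator0                 # -> 10.0.0.1 throughout this run
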
00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.803 11:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.062 nvme0n1 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:20.062 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.063 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.322 nvme0n1 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.322 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.582 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.583 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 nvme0n1 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:20.893 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:22:20.893 11:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.894 11:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.169 nvme0n1 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
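
The records above repeat one positive-path cycle per key: restrict the host to a single digest/dhgroup pair, attach with the key pair under test, verify the controller appeared under its -b name, then detach before the next keyid. A minimal sketch of that cycle follows, assuming SPDK's scripts/rpc.py (wrapped as rpc_cmd by the harness) is on PATH as rpc.py, that keyN/ckeyN name keys registered earlier in the test (not shown in this excerpt), and that the matching target-side key has already been programmed by nvmet_auth_set_key (its configfs writes are not visible in this xtrace). This is a condensed re-statement of what the log shows, not the harness's actual connect_authenticate().

    #!/usr/bin/env bash
    # Hypothetical condensed form of the attach/verify/detach cycle above.
    set -e
    ADDR=10.0.0.1 PORT=4420
    HOSTNQN=nqn.2024-02.io.spdk:host0
    SUBNQN=nqn.2024-02.io.spdk:cnode0

    connect_cycle() { # usage: connect_cycle <digest> <dhgroup> <keyid> [<ckeyid>]
        local digest=$1 dhgroup=$2 keyid=$3 ckeyid=${4-}
        # Limit the host to exactly one digest/dhgroup so the negotiation is deterministic.
        rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the key under test; the controller key is optional (keyid 4 has none here).
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ADDR" -s "$PORT" \
            -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" \
            ${ckeyid:+--dhchap-ctrlr-key "ckey$ckeyid"}
        # Success is asserted by the controller showing up under its -b name...
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        # ...and the cycle ends with a clean detach before the next keyid.
        rpc.py bdev_nvme_detach_controller nvme0
    }

    connect_cycle sha512 ffdhe6144 2 2   # key2 + ckey2, as in the records above
    connect_cycle sha512 ffdhe6144 4     # key4 only; keyid 4 carries no controller key in this run
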
00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:21.169 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjkzMzI2Yjc0OGE0NDY4OTE2MGRmNjk0Nzg4Zjc3YjTaKpR7: 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzA3MzEwYjhiMGQyZGYzNDJmYWM1ZDc3NDE1OGZiYTc1YjY1OWNhYWQ2ZTMyZWE3NjU2Y2E5MGU0ZjVmYjkxNplx6x4=: 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.170 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.739 nvme0n1 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
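
Every attach above is preceded by the same address lookup: get_main_ns_ip resolves the initiator-side address by reading the interface alias that the network fixture stored on the initiator0 device, rather than parsing `ip addr` output. The real nvmf/setup.sh helper also supports running inside a network namespace (the empty in_ns checks at setup.sh@157 are that branch), which this condensed sketch omits:

    # Minimal re-statement of the lookup seen above: the fixture wrote the
    # test IP into the device's ifalias, so resolution is a single sysfs read.
    get_ip_address() {
        local dev=$1 ip
        ip=$(cat "/sys/class/net/$dev/ifalias")
        [[ -n $ip ]] || return 1   # mirrors the [[ -n $ip ]] guard at setup.sh@164
        echo "$ip"
    }
    get_ip_address initiator0      # -> 10.0.0.1 in this run
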
00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:21.739 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:21.740 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:22.000 11:05:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.000 11:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.568 nvme0n1 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.568 11:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.568 11:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.137 nvme0n1 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzA0ZTFiYzQ5MDFlMmFkZWZjMzJiNTg4MjUxN2FlZDZkODU4MTUyYmQ1ZmQ1NTM5qUpIUQ==: 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: ]] 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmZhNWIyZjlmZmFlZDY5NTkyZjAwYTdjMDg4ZWJmOGFz/rkj: 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.137 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.138 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 nvme0n1 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjEwNGU3ZWFjMzI2Y2FkYTlmY2Q5ZjhjZDU5ZGRkZWNlNWVjZGNhMTUyMzA0ZjhkNjAzYTkyNzM4NWNjMGZhMC4L+J4=: 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- 
# local dev=initiator0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.707 11:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.277 nvme0n1 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.277 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:24.278 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.538 request: 00:22:24.538 { 00:22:24.538 "name": "nvme0", 00:22:24.538 "trtype": "tcp", 00:22:24.538 "traddr": "10.0.0.1", 00:22:24.538 "adrfam": "ipv4", 00:22:24.538 "trsvcid": "4420", 00:22:24.538 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:24.538 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:24.538 "prchk_reftag": false, 00:22:24.538 "prchk_guard": false, 00:22:24.538 "hdgst": false, 00:22:24.538 "ddgst": false, 00:22:24.538 "allow_unrecognized_csi": false, 00:22:24.538 "method": "bdev_nvme_attach_controller", 00:22:24.538 "req_id": 1 00:22:24.538 } 00:22:24.538 Got JSON-RPC error response 00:22:24.538 response: 00:22:24.538 { 00:22:24.538 "code": -5, 00:22:24.538 "message": "Input/output error" 00:22:24.538 } 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.538 11:05:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.538 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.538 request: 00:22:24.538 { 00:22:24.538 "name": "nvme0", 00:22:24.538 "trtype": "tcp", 00:22:24.538 "traddr": "10.0.0.1", 00:22:24.538 "adrfam": "ipv4", 00:22:24.538 "trsvcid": "4420", 00:22:24.538 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:24.538 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:24.538 "prchk_reftag": false, 00:22:24.538 "prchk_guard": false, 00:22:24.538 "hdgst": false, 00:22:24.538 "ddgst": false, 00:22:24.539 "dhchap_key": "key2", 00:22:24.539 "allow_unrecognized_csi": false, 00:22:24.539 "method": "bdev_nvme_attach_controller", 00:22:24.539 "req_id": 1 00:22:24.539 } 00:22:24.539 Got JSON-RPC error response 00:22:24.539 response: 00:22:24.539 { 00:22:24.539 "code": -5, 00:22:24.539 "message": "Input/output error" 00:22:24.539 } 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
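
The two request/response blocks above are the expected-failure half of the test: with the target programmed for key1 (host/auth.sh@110), an attach with no DH-HMAC-CHAP key and an attach with the wrong key (key2) must both be rejected, and the failed handshake surfaces as JSON-RPC error -5, "Input/output error". The harness wraps these calls in NOT, which inverts the exit status so a rejected attach counts as a pass. A minimal sketch of that assertion, reusing the hypothetical rpc.py naming from the earlier sketch (the real autotest_common.sh NOT() also distinguishes crash exit codes, which is omitted here):

    # Expect-failure helper in the spirit of autotest_common.sh's NOT():
    # succeed only if the wrapped command fails.
    NOT() { ! "$@"; }

    # With the target expecting key1, both of these must fail with
    # JSON-RPC error -5 (Input/output error), as in the responses above.
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
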
00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.539 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.798 request: 00:22:24.798 { 00:22:24.798 "name": "nvme0", 00:22:24.798 "trtype": "tcp", 00:22:24.798 "traddr": "10.0.0.1", 00:22:24.798 "adrfam": "ipv4", 00:22:24.798 "trsvcid": "4420", 00:22:24.798 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:24.798 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:24.798 "prchk_reftag": false, 00:22:24.798 "prchk_guard": false, 00:22:24.798 "hdgst": false, 00:22:24.798 "ddgst": false, 00:22:24.798 "dhchap_key": "key1", 00:22:24.798 "dhchap_ctrlr_key": "ckey2", 00:22:24.798 "allow_unrecognized_csi": false, 00:22:24.798 "method": "bdev_nvme_attach_controller", 00:22:24.798 "req_id": 1 00:22:24.798 } 00:22:24.798 Got JSON-RPC error response 00:22:24.798 response: 00:22:24.798 { 00:22:24.798 "code": -5, 00:22:24.798 "message": "Input/output error" 00:22:24.798 } 00:22:24.798 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:24.798 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:24.798 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.799 nvme0n1 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # 
local arg=rpc_cmd 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.799 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.058 request: 00:22:25.058 { 00:22:25.058 "name": "nvme0", 00:22:25.058 "dhchap_key": "key1", 00:22:25.058 "dhchap_ctrlr_key": "ckey2", 00:22:25.058 "method": "bdev_nvme_set_keys", 00:22:25.058 "req_id": 1 00:22:25.058 } 00:22:25.058 Got JSON-RPC error response 00:22:25.058 response: 00:22:25.058 { 00:22:25.058 "code": -13, 00:22:25.058 "message": "Permission denied" 00:22:25.058 } 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.059 11:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.059 11:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:22:25.059 11:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:25.997 11:05:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGU1ODZjMWFiMTM5YTg3YTBlMTE3NjA2Y2VlN2VjNjUyZDQzM2JiY2ZiZjlkODViKnbgvg==: 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTBkNGYyOTEzOWZjNmRkNjdlN2JmZjI0OWI1NWI4ZDkyZDY5ZmI4ZDNjYzFmMjFk0XzO4g==: 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.997 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.257 nvme0n1 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWMyYzQxMzViNjEzOGU1ZDA4N2RmZGI2ZTY0MzhkMjJUNFt8: 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: ]] 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDYzMzFiYzA0MmUyMmE5ZTNiMTgyOGIzM2QzZWVhNTa4m5Km: 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.257 request: 00:22:26.257 { 00:22:26.257 "name": "nvme0", 00:22:26.257 "dhchap_key": "key2", 00:22:26.257 "dhchap_ctrlr_key": "ckey1", 00:22:26.257 "method": "bdev_nvme_set_keys", 00:22:26.257 "req_id": 1 00:22:26.257 } 00:22:26.257 Got JSON-RPC error response 00:22:26.257 response: 00:22:26.257 { 00:22:26.257 "code": -13, 00:22:26.257 "message": "Permission denied" 00:22:26.257 } 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 
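The exchange above is the heart of the re-keying test: a live controller is asked to re-authenticate with a DH-HMAC-CHAP key pair the target no longer accepts, and the RPC is expected to fail with -13 (Permission denied) while the controller itself stays attached. A minimal stand-alone sketch of the same flow, assuming the kernel nvmet target configured earlier in this run is serving nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 and that keys are already loaded under the names key1/key2/ckey1/ckey2; the rpc.py path is the usual in-tree location, not taken from this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach with a matching pair; a mismatched pair (e.g. key1/ckey2) is
  # rejected at connect time with -5 Input/output error, as seen above.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # After the target side rotates to keyid 2, re-key the live controller:
  # only the matching pair succeeds, anything else gets -13 Permission denied.
  $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $rpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 \
      || echo 'rejected as expected'
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # controller survived: nvme0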
00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:22:26.257 11:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:27.194 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:27.194 rmmod nvme_tcp 00:22:27.194 rmmod nvme_fabrics 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 78253 ']' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 78253 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78253 ']' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78253 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78253 00:22:27.452 killing process with pid 78253 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 78253' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78253 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78253 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:27.452 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:27.453 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:27.453 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:22:27.711 11:05:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:28.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:28.648 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:28.914 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:28.914 11:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Elg /tmp/spdk.key-null.NZd /tmp/spdk.key-sha256.tjx /tmp/spdk.key-sha384.x3a /tmp/spdk.key-sha512.Ogh /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:22:28.914 11:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:29.487 0000:00:03.0 
(1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.487 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.487 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:29.487 00:22:29.487 real 0m37.905s 00:22:29.487 user 0m35.260s 00:22:29.487 sys 0m6.154s 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.487 ************************************ 00:22:29.487 END TEST nvmf_auth_host 00:22:29.487 ************************************ 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.487 ************************************ 00:22:29.487 START TEST nvmf_digest 00:22:29.487 ************************************ 00:22:29.487 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:29.748 * Looking for test storage... 00:22:29.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:29.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.748 --rc genhtml_branch_coverage=1 00:22:29.748 --rc genhtml_function_coverage=1 00:22:29.748 --rc genhtml_legend=1 00:22:29.748 --rc geninfo_all_blocks=1 00:22:29.748 --rc geninfo_unexecuted_blocks=1 00:22:29.748 00:22:29.748 ' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:29.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.748 --rc genhtml_branch_coverage=1 00:22:29.748 --rc genhtml_function_coverage=1 00:22:29.748 --rc genhtml_legend=1 00:22:29.748 --rc geninfo_all_blocks=1 00:22:29.748 --rc geninfo_unexecuted_blocks=1 00:22:29.748 00:22:29.748 ' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:29.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.748 --rc genhtml_branch_coverage=1 00:22:29.748 --rc genhtml_function_coverage=1 00:22:29.748 --rc genhtml_legend=1 00:22:29.748 --rc geninfo_all_blocks=1 00:22:29.748 --rc geninfo_unexecuted_blocks=1 00:22:29.748 00:22:29.748 ' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:29.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.748 --rc genhtml_branch_coverage=1 00:22:29.748 --rc genhtml_function_coverage=1 00:22:29.748 --rc genhtml_legend=1 00:22:29.748 --rc geninfo_all_blocks=1 00:22:29.748 --rc geninfo_unexecuted_blocks=1 00:22:29.748 00:22:29.748 ' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.748 11:05:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.748 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.749 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:30.008 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:30.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:30.009 11:05:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@280 -- # nvmf_veth_init 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@223 -- # create_target_ns 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # create_main_bridge 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@105 -- # delete_main_bridge 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.009 11:05:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator0 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:30.009 
11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:22:30.009 11:05:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target0 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0 up 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target0_br 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target0 00:22:30.009 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:30.010 10.0.0.1 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:22:30.010 10.0.0.2 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator0 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 
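Everything from create_target_ns down to here is the standard per-pair veth fixture: the initiator end of each pair stays in the root namespace, the target end is moved into nvmf_ns_spdk, each device's address is also mirrored into its ifalias (which get_ip_address reads back later), and the *_br peer ends are enslaved to the nvmf_br bridge so both namespaces share one L2 segment. A rough stand-alone equivalent for pair 0, assuming root and the same device names and 10.0.0.0/24 pool used in this run (the target-side bridge and iptables steps that the trace performs next are included for completeness):

  ip netns add nvmf_ns_spdk
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link add initiator0 type veth peer name initiator0_br
  ip link add target0 type veth peer name target0_br
  ip link set target0 netns nvmf_ns_spdk
  ip addr add 10.0.0.1/24 dev initiator0
  echo 10.0.0.1 > /sys/class/net/initiator0/ifalias
  ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
  ip netns exec nvmf_ns_spdk sh -c 'echo 10.0.0.2 > /sys/class/net/target0/ifalias'
  ip link set initiator0 up && ip link set initiator0_br up
  ip netns exec nvmf_ns_spdk ip link set target0 up
  ip link set initiator0_br master nvmf_br
  ip link set target0_br master nvmf_br && ip link set target0_br up
  # Admit NVMe/TCP traffic arriving on the initiator side:
  iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT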
00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target0_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator1 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1 up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target1 00:22:30.010 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1 up 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target1_br 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:30.011 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772163 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:22:30.271 10.0.0.3 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772164 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:22:30.271 10.0.0.4 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator1_br 
master nvmf_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target1_br 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 2 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:22:30.271 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:30.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:22:30.272 00:22:30.272 --- 10.0.0.1 ping statistics --- 00:22:30.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.272 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:30.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:22:30.272 00:22:30.272 --- 10.0.0.2 ping statistics --- 00:22:30.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.272 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:22:30.272 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:30.272 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:22:30.272 00:22:30.272 --- 10.0.0.3 ping statistics --- 00:22:30.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.272 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:30.272 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:22:30.532 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:30.532 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.111 ms 00:22:30.532 00:22:30.532 --- 10.0.0.4 ping statistics --- 00:22:30.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.532 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # return 0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:30.532 
11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:30.532 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 
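The setup.sh trace above reduces to a small recipe per initiator/target pair: a veth pair for each endpoint, the target end moved into the nvmf_ns_spdk namespace, /24 addresses taken from an integer pool and mirrored into each device's ifalias (which the get_ip_address lookups later read back), bridge-side peers enslaved to nvmf_br, and an explicit iptables ACCEPT for NVMe/TCP port 4420. A condensed sketch of one pair follows; the byte-shift body of val_to_ip is an assumption, since the trace only shows the resulting printf '%u.%u.%u.%u' call with the bytes already split out:

  # Condensed sketch of one initiator/target pair as traced above
  # (illustrative only; the real logic lives in test/nvmf/setup.sh).
  NS=nvmf_ns_spdk

  val_to_ip() {                      # assumed implementation: 167772163 -> 10.0.0.3
      local val=$1
      printf '%u.%u.%u.%u\n' \
          $(( val >> 24 )) $(( (val >> 16) & 0xff )) \
          $(( (val >> 8) & 0xff )) $(( val & 0xff ))
  }

  # one veth pair per endpoint, each with a bridge-side peer
  ip link add initiator1 type veth peer name initiator1_br
  ip link add target1    type veth peer name target1_br

  # the target side lives inside the test namespace
  ip link set target1 netns "$NS"

  # addresses come from the integer pool; ifalias doubles as the address registry
  ip addr add "$(val_to_ip 167772163)/24" dev initiator1
  val_to_ip 167772163 | tee /sys/class/net/initiator1/ifalias
  ip netns exec "$NS" ip addr add "$(val_to_ip 167772164)/24" dev target1
  val_to_ip 167772164 | ip netns exec "$NS" tee /sys/class/net/target1/ifalias

  # bridge-side peers join the shared nvmf_br and everything comes up
  ip link set initiator1_br master nvmf_br
  ip link set target1_br    master nvmf_br
  for dev in initiator1 initiator1_br target1_br; do ip link set "$dev" up; done
  ip netns exec "$NS" ip link set target1 up

  # NVMe/TCP traffic into the initiator side is whitelisted explicitly
  iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'

Reachability is then proven in both directions, as the ping_ips loop above shows: the namespace pings each initiator address (10.0.0.1, 10.0.0.3) and the host pings each target address (10.0.0.2, 10.0.0.4).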
00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:30.533 ************************************ 00:22:30.533 START TEST nvmf_digest_clean 00:22:30.533 ************************************ 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:30.533 11:05:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=80161 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 80161 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80161 ']' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.533 11:05:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:30.533 [2024-12-05 11:05:57.642600] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:22:30.534 [2024-12-05 11:05:57.642666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.791 [2024-12-05 11:05:57.795732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.791 [2024-12-05 11:05:57.849202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.791 [2024-12-05 11:05:57.849261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.791 [2024-12-05 11:05:57.849289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.791 [2024-12-05 11:05:57.849303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.791 [2024-12-05 11:05:57.849315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
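nvmfappstart then launches the target inside the namespace with --wait-for-rpc (nvmfpid=80161 above) and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start/wait pattern, with the caveat that polling via rpc_get_methods is an assumption about how readiness is detected; timeout and retry accounting are omitted:

  # Start the target in the test namespace, as traced above
  ip netns exec nvmf_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Block until the app answers on its UNIX domain RPC socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null; do
      sleep 0.5
  done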
00:22:30.791 [2024-12-05 11:05:57.849654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:31.723 [2024-12-05 11:05:58.683235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:31.723 null0 00:22:31.723 [2024-12-05 11:05:58.729187] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.723 [2024-12-05 11:05:58.753249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80193 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80193 /var/tmp/bperf.sock 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80193 ']' 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:22:31.723 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.724 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:31.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:31.724 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.724 11:05:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:31.724 [2024-12-05 11:05:58.809869] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:22:31.724 [2024-12-05 11:05:58.809964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80193 ] 00:22:31.982 [2024-12-05 11:05:58.952132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.982 [2024-12-05 11:05:59.003105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.917 11:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.917 11:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:32.917 11:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:32.917 11:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:32.917 11:05:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:32.917 [2024-12-05 11:05:59.951983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.917 11:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:32.917 11:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:33.176 nvme0n1 00:22:33.176 11:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:33.176 11:06:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:33.435 Running I/O for 2 seconds... 
00:22:35.321 19177.00 IOPS, 74.91 MiB/s [2024-12-05T11:06:02.480Z] 19304.00 IOPS, 75.41 MiB/s 00:22:35.321 Latency(us) 00:22:35.321 [2024-12-05T11:06:02.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.321 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:35.321 nvme0n1 : 2.01 19317.39 75.46 0.00 0.00 6621.74 6106.17 19476.56 00:22:35.321 [2024-12-05T11:06:02.480Z] =================================================================================================================== 00:22:35.321 [2024-12-05T11:06:02.480Z] Total : 19317.39 75.46 0.00 0.00 6621.74 6106.17 19476.56 00:22:35.321 { 00:22:35.321 "results": [ 00:22:35.321 { 00:22:35.321 "job": "nvme0n1", 00:22:35.321 "core_mask": "0x2", 00:22:35.321 "workload": "randread", 00:22:35.321 "status": "finished", 00:22:35.321 "queue_depth": 128, 00:22:35.321 "io_size": 4096, 00:22:35.321 "runtime": 2.00524, 00:22:35.321 "iops": 19317.388442281223, 00:22:35.321 "mibps": 75.45854860266103, 00:22:35.321 "io_failed": 0, 00:22:35.321 "io_timeout": 0, 00:22:35.321 "avg_latency_us": 6621.737228965428, 00:22:35.321 "min_latency_us": 6106.165461847389, 00:22:35.321 "max_latency_us": 19476.562248995982 00:22:35.321 } 00:22:35.321 ], 00:22:35.321 "core_count": 1 00:22:35.321 } 00:22:35.321 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:35.321 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:35.321 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:35.321 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:35.321 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:35.321 | select(.opcode=="crc32c") 00:22:35.321 | "\(.module_name) \(.executed)"' 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80193 00:22:35.580 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80193 ']' 00:22:35.581 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80193 00:22:35.581 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:35.581 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.581 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80193 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
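The Latency(us) table above is the JSON stats block re-rendered, and the headline numbers follow from I/O size and runtime alone: MiB/s = IOPS * io_size / 2^20, and total completions = IOPS * runtime. A quick cross-check against the logged randread figures:

  # Sanity-check the logged numbers (io_size 4096, runtime 2.00524 s)
  awk 'BEGIN {
      iops = 19317.388442281223; io_size = 4096; runtime = 2.00524
      printf "MiB/s:       %.2f\n", iops * io_size / (1024 * 1024)  # 75.46, as logged
      printf "completions: %.0f\n", iops * runtime                  # ~38736 I/Os in 2 s
  }'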
00:22:35.840 killing process with pid 80193 00:22:35.840 Received shutdown signal, test time was about 2.000000 seconds 00:22:35.840 00:22:35.840 Latency(us) 00:22:35.840 [2024-12-05T11:06:02.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.840 [2024-12-05T11:06:02.999Z] =================================================================================================================== 00:22:35.840 [2024-12-05T11:06:02.999Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80193' 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80193 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80193 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80248 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80248 /var/tmp/bperf.sock 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80248 ']' 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:35.840 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:35.841 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:35.841 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.841 11:06:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:35.841 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:35.841 Zero copy mechanism will not be used. 00:22:35.841 [2024-12-05 11:06:02.968257] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:22:35.841 [2024-12-05 11:06:02.968343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80248 ] 00:22:36.100 [2024-12-05 11:06:03.120620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.100 [2024-12-05 11:06:03.170395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.669 11:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.669 11:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:36.669 11:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:36.669 11:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:36.669 11:06:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:36.928 [2024-12-05 11:06:04.047229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:37.187 11:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.187 11:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:37.446 nvme0n1 00:22:37.446 11:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:37.446 11:06:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:37.446 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:37.446 Zero copy mechanism will not be used. 00:22:37.446 Running I/O for 2 seconds... 
00:22:39.340 8688.00 IOPS, 1086.00 MiB/s [2024-12-05T11:06:06.499Z] 8736.00 IOPS, 1092.00 MiB/s 00:22:39.340 Latency(us) 00:22:39.340 [2024-12-05T11:06:06.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.340 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:39.340 nvme0n1 : 2.00 8734.80 1091.85 0.00 0.00 1829.15 1737.10 10475.23 00:22:39.340 [2024-12-05T11:06:06.499Z] =================================================================================================================== 00:22:39.340 [2024-12-05T11:06:06.499Z] Total : 8734.80 1091.85 0.00 0.00 1829.15 1737.10 10475.23 00:22:39.340 { 00:22:39.340 "results": [ 00:22:39.340 { 00:22:39.340 "job": "nvme0n1", 00:22:39.340 "core_mask": "0x2", 00:22:39.340 "workload": "randread", 00:22:39.340 "status": "finished", 00:22:39.340 "queue_depth": 16, 00:22:39.340 "io_size": 131072, 00:22:39.340 "runtime": 2.002106, 00:22:39.340 "iops": 8734.802253227352, 00:22:39.340 "mibps": 1091.850281653419, 00:22:39.340 "io_failed": 0, 00:22:39.340 "io_timeout": 0, 00:22:39.340 "avg_latency_us": 1829.146935041171, 00:22:39.340 "min_latency_us": 1737.0987951807228, 00:22:39.340 "max_latency_us": 10475.232128514057 00:22:39.340 } 00:22:39.340 ], 00:22:39.340 "core_count": 1 00:22:39.340 } 00:22:39.340 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:39.340 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:39.340 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:39.340 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:39.340 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:39.340 | select(.opcode=="crc32c") 00:22:39.340 | "\(.module_name) \(.executed)"' 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80248 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80248 ']' 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80248 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80248 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
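Every variant ends the same way: the test reads the crc32c row out of accel_get_stats and asserts both that at least one operation executed and that the module matches the expectation, which is software here since DSA is disabled. The check, condensed from the read/get_accel_stats/jq trace above:

  # Decide which accel module did the crc32c work, as traced above
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # with no DSA in play the expected module is "software"
  (( acc_executed > 0 )) && [[ $acc_module == software ]]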
00:22:39.600 killing process with pid 80248 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80248' 00:22:39.600 Received shutdown signal, test time was about 2.000000 seconds 00:22:39.600 00:22:39.600 Latency(us) 00:22:39.600 [2024-12-05T11:06:06.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.600 [2024-12-05T11:06:06.759Z] =================================================================================================================== 00:22:39.600 [2024-12-05T11:06:06.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80248 00:22:39.600 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80248 00:22:39.859 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:22:39.859 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:39.859 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:39.859 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80308 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80308 /var/tmp/bperf.sock 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80308 ']' 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:39.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.860 11:06:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:39.860 [2024-12-05 11:06:06.964494] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:22:39.860 [2024-12-05 11:06:06.965092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80308 ] 00:22:40.119 [2024-12-05 11:06:07.114320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.119 [2024-12-05 11:06:07.160797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.687 11:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.687 11:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:40.687 11:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:40.687 11:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:40.687 11:06:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:40.947 [2024-12-05 11:06:08.049955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:40.947 11:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.947 11:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:41.517 nvme0n1 00:22:41.517 11:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:41.517 11:06:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:41.517 Running I/O for 2 seconds... 
00:22:43.394 20956.00 IOPS, 81.86 MiB/s [2024-12-05T11:06:10.553Z] 20955.50 IOPS, 81.86 MiB/s 00:22:43.394 Latency(us) 00:22:43.394 [2024-12-05T11:06:10.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.394 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:43.394 nvme0n1 : 2.00 20986.41 81.98 0.00 0.00 6094.45 3474.20 12212.33 00:22:43.394 [2024-12-05T11:06:10.553Z] =================================================================================================================== 00:22:43.394 [2024-12-05T11:06:10.553Z] Total : 20986.41 81.98 0.00 0.00 6094.45 3474.20 12212.33 00:22:43.394 { 00:22:43.394 "results": [ 00:22:43.394 { 00:22:43.394 "job": "nvme0n1", 00:22:43.394 "core_mask": "0x2", 00:22:43.394 "workload": "randwrite", 00:22:43.394 "status": "finished", 00:22:43.394 "queue_depth": 128, 00:22:43.394 "io_size": 4096, 00:22:43.394 "runtime": 2.003153, 00:22:43.394 "iops": 20986.414916883532, 00:22:43.394 "mibps": 81.9781832690763, 00:22:43.394 "io_failed": 0, 00:22:43.394 "io_timeout": 0, 00:22:43.394 "avg_latency_us": 6094.454679232164, 00:22:43.394 "min_latency_us": 3474.1975903614457, 00:22:43.394 "max_latency_us": 12212.330923694779 00:22:43.394 } 00:22:43.394 ], 00:22:43.394 "core_count": 1 00:22:43.394 } 00:22:43.394 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:43.394 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:43.394 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:43.394 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:43.394 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:43.394 | select(.opcode=="crc32c") 00:22:43.394 | "\(.module_name) \(.executed)"' 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80308 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80308 ']' 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80308 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80308 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
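Each of the four run_bperf variants repeats the same flow traced above, differing only in -w, -o and -q. Condensed, with paths and the bperf.sock address exactly as logged; the waitforlisten step between launch and the first RPC is omitted:

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start bdevperf paused on its own RPC socket (this variant: randwrite, 4 KiB, QD 128)
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Finish init, attach an NVMe/TCP controller with data digest enabled, run the workload
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # exposes bdev nvme0n1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests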
00:22:43.654 killing process with pid 80308 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80308' 00:22:43.654 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80308 00:22:43.654 Received shutdown signal, test time was about 2.000000 seconds 00:22:43.654 00:22:43.654 Latency(us) 00:22:43.654 [2024-12-05T11:06:10.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.655 [2024-12-05T11:06:10.814Z] =================================================================================================================== 00:22:43.655 [2024-12-05T11:06:10.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.655 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80308 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:43.913 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80364 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80364 /var/tmp/bperf.sock 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80364 ']' 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.914 11:06:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:43.914 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:43.914 Zero copy mechanism will not be used. 00:22:43.914 [2024-12-05 11:06:10.991180] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:22:43.914 [2024-12-05 11:06:10.991252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80364 ] 00:22:44.172 [2024-12-05 11:06:11.134458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.172 [2024-12-05 11:06:11.185544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.741 11:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.741 11:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:44.741 11:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:44.741 11:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:44.741 11:06:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:45.001 [2024-12-05 11:06:12.070134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:45.001 11:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.001 11:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:45.260 nvme0n1 00:22:45.260 11:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:45.260 11:06:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:45.519 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:45.519 Zero copy mechanism will not be used. 00:22:45.519 Running I/O for 2 seconds... 
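
Each pass follows the same bring-up visible in the trace: bdevperf is launched suspended (-z --wait-for-rpc), the test waits for its RPC socket, starts the framework, attaches the NVMe-oF TCP controller with data digest enabled (--ddgst), and only then drives I/O through bdevperf.py. Condensed into one sketch, with every flag and path taken from the log above:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # ...wait for /var/tmp/bperf.sock to accept RPCs...
  "$rpc" -s /var/tmp/bperf.sock framework_start_init
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The repeated zero-copy notice is expected rather than a failure: at -o 131072 the I/O size exceeds the 65536-byte zero-copy threshold, so the socket layer falls back to copying, exactly as the notice says.
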
00:22:47.388 8927.00 IOPS, 1115.88 MiB/s [2024-12-05T11:06:14.547Z] 8947.50 IOPS, 1118.44 MiB/s
00:22:47.388 Latency(us)
00:22:47.388 [2024-12-05T11:06:14.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:47.388 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:22:47.388 nvme0n1 : 2.00 8942.55 1117.82 0.00 0.00 1785.73 1223.87 3184.68
00:22:47.388 [2024-12-05T11:06:14.547Z] ===================================================================================================================
00:22:47.388 [2024-12-05T11:06:14.547Z] Total : 8942.55 1117.82 0.00 0.00 1785.73 1223.87 3184.68
00:22:47.388 {
00:22:47.388 "results": [
00:22:47.388 {
00:22:47.388 "job": "nvme0n1",
00:22:47.388 "core_mask": "0x2",
00:22:47.388 "workload": "randwrite",
00:22:47.388 "status": "finished",
00:22:47.388 "queue_depth": 16,
00:22:47.388 "io_size": 131072,
00:22:47.388 "runtime": 2.002784,
00:22:47.388 "iops": 8942.551967661017,
00:22:47.388 "mibps": 1117.818995957627,
00:22:47.388 "io_failed": 0,
00:22:47.388 "io_timeout": 0,
00:22:47.388 "avg_latency_us": 1785.7288470016304,
00:22:47.388 "min_latency_us": 1223.8650602409639,
00:22:47.388 "max_latency_us": 3184.681124497992
00:22:47.388 }
00:22:47.388 ],
00:22:47.388 "core_count": 1
00:22:47.388 }
00:22:47.388 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:22:47.388 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:22:47.388 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:22:47.388 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:22:47.388 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:22:47.388 | select(.opcode=="crc32c")
00:22:47.388 | "\(.module_name) \(.executed)"'
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80364
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80364 ']'
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80364
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80364
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
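
The MiB/s column is just IOPS scaled by the I/O size (mibps = iops * io_size / 2^20), which is why this 128 KiB pass reports about 1117.8 MiB/s from roughly 8943 IOPS while the 4 KiB pass above produced about 82 MiB/s from roughly 20986 IOPS. A one-line check against the "mibps" field in the JSON:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8942.551967661017 * 131072 / 1048576 }'
  # prints 1117.82 MiB/s, matching "mibps": 1117.818995957627 above
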
00:22:47.647 killing process with pid 80364 00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80364' 00:22:47.647 Received shutdown signal, test time was about 2.000000 seconds 00:22:47.647 00:22:47.647 Latency(us) 00:22:47.647 [2024-12-05T11:06:14.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.647 [2024-12-05T11:06:14.806Z] =================================================================================================================== 00:22:47.647 [2024-12-05T11:06:14.806Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80364 00:22:47.647 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80364 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80161 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80161 ']' 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80161 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80161 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.905 killing process with pid 80161 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.905 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80161' 00:22:47.906 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80161 00:22:47.906 11:06:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80161 00:22:48.163 00:22:48.163 real 0m17.573s 00:22:48.163 user 0m32.904s 00:22:48.163 sys 0m5.246s 00:22:48.163 ************************************ 00:22:48.163 END TEST nvmf_digest_clean 00:22:48.163 ************************************ 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:48.163 ************************************ 00:22:48.163 START TEST nvmf_digest_error 00:22:48.163 ************************************ 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:22:48.163 11:06:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.163 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=80447 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 80447 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80447 ']' 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.164 11:06:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:48.164 [2024-12-05 11:06:15.293757] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:22:48.164 [2024-12-05 11:06:15.293824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.421 [2024-12-05 11:06:15.446758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.421 [2024-12-05 11:06:15.496813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.421 [2024-12-05 11:06:15.496870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.421 [2024-12-05 11:06:15.496881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.421 [2024-12-05 11:06:15.496889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.421 [2024-12-05 11:06:15.496896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
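
The error flavour of the test starts the target with --wait-for-rpc for a specific reason: accel opcode assignments can only be changed before the framework finishes initializing. The trace below therefore routes crc32c to the error-injection module first and completes startup afterwards. A sketch of that ordering against the target's default RPC socket (the framework_start_init step is implied by --wait-for-rpc rather than quoted verbatim in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" accel_assign_opc -o crc32c -m error   # digest.sh@104, visible below
  "$rpc" framework_start_init                  # only now finish target bring-up
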
00:22:48.421 [2024-12-05 11:06:15.497173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.357 [2024-12-05 11:06:16.240441] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.357 [2024-12-05 11:06:16.293575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:49.357 null0 00:22:49.357 [2024-12-05 11:06:16.340587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.357 [2024-12-05 11:06:16.364667] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80482 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80482 /var/tmp/bperf.sock 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:49.357 11:06:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80482 ']' 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:49.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.357 11:06:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:49.357 [2024-12-05 11:06:16.422620] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:22:49.357 [2024-12-05 11:06:16.422699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80482 ] 00:22:49.617 [2024-12-05 11:06:16.574472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.617 [2024-12-05 11:06:16.626299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.617 [2024-12-05 11:06:16.668224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:50.184 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:50.184 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:50.184 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.185 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.444 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:50.703 nvme0n1 00:22:50.703 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:50.703 11:06:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.703 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:50.703 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.961 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:50.961 11:06:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:50.962 Running I/O for 2 seconds... 00:22:50.962 [2024-12-05 11:06:17.978269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:17.978342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:17.978356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:17.991730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:17.991778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:17.991791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.005150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.005194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.005206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.018332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.018369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.018381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.031484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.031519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.031531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.044619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.044655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16496 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.044667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.057718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.057754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.057765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.070880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.070916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.070927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.084100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.084136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.084148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.097361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.097396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.097407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:50.962 [2024-12-05 11:06:18.110578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:50.962 [2024-12-05 11:06:18.110612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:50.962 [2024-12-05 11:06:18.110623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.123878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.123916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.123927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.137014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.137049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:13689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.137060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.150270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.150314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.150325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.163499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.163535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.163546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.176912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.176948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.176959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.190271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.190318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.190330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.203384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.203421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.203432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.216500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.216535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.216546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.229599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.229655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.229666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.242745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.242782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.242794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.255861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.255898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.255909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.268962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.268997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.269008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.282082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.282116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.282128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.295191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.295227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.295238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.308308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.308345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.308356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.321491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 
[2024-12-05 11:06:18.321531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.321541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.334629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.334669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.334680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.347763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.347797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.347808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.360865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.360910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.221 [2024-12-05 11:06:18.373969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.221 [2024-12-05 11:06:18.374002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.221 [2024-12-05 11:06:18.374014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.480 [2024-12-05 11:06:18.387072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.480 [2024-12-05 11:06:18.387106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.480 [2024-12-05 11:06:18.387117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.480 [2024-12-05 11:06:18.400178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.480 [2024-12-05 11:06:18.400213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.480 [2024-12-05 11:06:18.400224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.480 [2024-12-05 11:06:18.413295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xde5fb0) 00:22:51.480 [2024-12-05 11:06:18.413328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.480 [2024-12-05 11:06:18.413339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.480 [2024-12-05 11:06:18.426403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.480 [2024-12-05 11:06:18.426435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.480 [2024-12-05 11:06:18.426446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.480 [2024-12-05 11:06:18.439508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.480 [2024-12-05 11:06:18.439540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.439551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.452623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.452655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.452667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.465796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.465840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.478911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.478943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.478954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.492008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.492041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.492052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.505158] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.505191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.505203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.518350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.518386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.518413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.531924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.531961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.531972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.545078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.558184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.558220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.558231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.571305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.571339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.571351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.584417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.584450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.584461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
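
This burst of repeating records is the injected failure working as intended: the error module corrupts crc32c results (accel_error_inject_error -o crc32c -t corrupt -i 256, configured above), nvme_tcp.c then detects the data digest mismatch on the receive path, and each affected command completes with status (00/22), generic status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. Because bdev_nvme was configured above with --bdev-retry-count -1, such completions are retried instead of being surfaced as I/O errors, which is why the run keeps making progress (the 19103.00 IOPS figure embedded later in this stretch). A rough tally of the injections from a saved console log (file name hypothetical):

  grep -c 'data digest error on tqpair' console.log   # one match per corrupted completion
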
00:22:51.481 [2024-12-05 11:06:18.597512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.597544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.597556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.610607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.610641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.610652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.623721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.623754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.623765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.481 [2024-12-05 11:06:18.636805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.481 [2024-12-05 11:06:18.636838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.481 [2024-12-05 11:06:18.636848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.649934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.649974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.649985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.663072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.663117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.663128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.676196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.676235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.676246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.689303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.689336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.689347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.702413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.702446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.702457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.715519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.715553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.715564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.728612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.728648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.728659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.741717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.739 [2024-12-05 11:06:18.741751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.739 [2024-12-05 11:06:18.741763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.739 [2024-12-05 11:06:18.754883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.740 [2024-12-05 11:06:18.754918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-12-05 11:06:18.754928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.740 [2024-12-05 11:06:18.768027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0) 00:22:51.740 [2024-12-05 11:06:18.768064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.740 [2024-12-05 11:06:18.768075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:51.740 [2024-12-05 11:06:18.781134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xde5fb0)
00:22:51.740 [2024-12-05 11:06:18.781168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:51.740 [2024-12-05 11:06:18.781179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence -- data digest error on tqpair=(0xde5fb0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 -- repeats roughly every 13 ms for each outstanding READ on qid:1 (cid 125 down through cid 0, then cid 1 up through cid 43), with a periodic throughput sample interleaved, until the timed run ends at 00:22:53.035: ...]
19103.00 IOPS, 74.62 MiB/s [2024-12-05T11:06:19.158Z]
19039.00 IOPS, 74.37 MiB/s
00:22:53.035 Latency(us)
[2024-12-05T11:06:20.194Z] Device Information          : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:22:53.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:53.035 nvme0n1                     :       2.01  19006.01    74.24     0.00   0.00   6730.31  6474.64 25372.17
[2024-12-05T11:06:20.194Z] ===================================================================================================================
[2024-12-05T11:06:20.194Z] Total                       :             19006.01    74.24     0.00   0.00   6730.31  6474.64 25372.17
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.010206,
      "iops": 19006.01231913545,
      "mibps": 74.24223562162285,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 6730.313709468035,
      "min_latency_us": 6474.640963855421,
      "max_latency_us": 25372.170281124498
    }
  ],
  "core_count": 1
}
00:22:53.035 11:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
11:06:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:53.294 11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 149 > 0 ))
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80482
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80482 ']'
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80482
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80482
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
killing process with pid 80482
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80482'
Received shutdown signal, test time was about 2.000000 seconds
00:22:53.294 Latency(us)
[2024-12-05T11:06:20.453Z] Device Information          : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average      min      max
[2024-12-05T11:06:20.453Z] ===================================================================================================================
[2024-12-05T11:06:20.453Z] Total                       :                 0.00      0.00     0.00   0.00      0.00     0.00     0.00
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80482
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80482
00:22:53.553 11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80540
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80540 /var/tmp/bperf.sock
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80540 ']'
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:06:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:53.554 [2024-12-05 11:06:20.549169] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
[2024-12-05 11:06:20.549451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80540 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
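For reference, the pass/fail signal computed in the get_transient_errcount trace above comes straight out of bdev_get_iostat: with bdev_nvme_set_options --nvme-error-stat in effect, the bdev layer counts completions per NVMe status code, and the harness only asserts that the transient-transport-error counter is non-zero (149 in this run). A minimal standalone sketch of that check, reusing the rpc.py path, socket, and jq filter visible in this log (the error message text is illustrative):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check traced above; paths from this log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # bdev_get_iostat exposes per-status-code NVMe error counters for the bdev
    # because the controller was created while --nvme-error-stat was enabled.
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # Injected digest corruption must surface as transient transport errors.
    (( errcount > 0 )) || echo "expected transient transport errors, got $errcount" >&2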
00:22:53.554 [2024-12-05 11:06:20.695440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:53.812 [2024-12-05 11:06:20.773046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:53.812 [2024-12-05 11:06:20.847788] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:22:54.381 11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:54.641 11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:54.641 11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:06:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:54.919 nvme0n1
00:22:54.919 11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:22:55.179 11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:06:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:55.179 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:55.179 Zero copy mechanism will not be used.
00:22:55.179 Running I/O for 2 seconds...
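The RPC sequence traced just above is the entire setup for this injected-error run. Condensed into one place, using only commands that appear in this log: the rpc.py path and /var/tmp/bperf.sock are taken verbatim from the traces, while app_sock is a stand-in, since the socket that rpc_cmd addresses for the accel error injection is not shown here.

    #!/usr/bin/env bash
    # Sketch of the traced setup. bperf_sock is bdevperf's RPC socket from this
    # log; app_sock is an assumed default for the application rpc_cmd talks to.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock
    app_sock=/var/tmp/spdk.sock    # assumption, not visible in this log

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Make sure no stale crc32c error injection is active before attaching.
    "$rpc" -s "$app_sock" accel_error_inject_error -o crc32c -t disable

    # Attach with data digest (--ddgst) so received payloads are CRC32C-verified.
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption (the '-i 32' argument is copied from the trace),
    # then kick off the timed bdevperf workload over the RPC socket.
    "$rpc" -s "$app_sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests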
00:22:55.179 [2024-12-05 11:06:22.211913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0)
00:22:55.180 [2024-12-05 11:06:22.212008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.180 [2024-12-05 11:06:22.212025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line sequence -- data digest error on tqpair=(0x17209b0), READ command print (now len:32, matching the 131072-byte I/O size), COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats roughly every 4-5 ms as cid cycles 0 through 15 for the queue depth of 16, continuing from 00:22:55.179 through 00:22:55.442: ...]
00:22:55.442 [2024-12-05 11:06:22.372037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0)
00:22:55.442 [2024-12-05 11:06:22.372076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1
lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.372089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.376395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.376432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.376445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.380673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.380710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.380723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.384970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.385009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.385021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.389325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.389386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.389400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.393707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.393745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.393757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.398131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.398167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.398179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.402633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.402839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.402855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.407168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.407206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.407218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.411609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.411646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.411659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.416183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.416219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.416232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.420629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.420666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.420679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.425112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.425159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.425171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.429715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.429883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.429900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.434236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 
00:22:55.442 [2024-12-05 11:06:22.434289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.434303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.438704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.438741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.438753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.443107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.443145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.443158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.447679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.447727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.447739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.452156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.452194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.452206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.456638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.456675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.456687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.461183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.461220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.461233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.465591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.465628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.465640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.469988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.470026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.470039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.474475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.442 [2024-12-05 11:06:22.474511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.442 [2024-12-05 11:06:22.474523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.442 [2024-12-05 11:06:22.478893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.478930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.478943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.483154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.483191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.483204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.487463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.487499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.487511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.491739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.491928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.496289] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.496325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.496337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.500718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.500755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.500767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.505115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.505152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.505165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.509439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.509476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.509488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.513739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.513776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.513788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.518040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.518085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.518098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.522391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.522427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.522439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:22:55.443 [2024-12-05 11:06:22.526669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.526706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.526719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.531012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.531049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.531060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.535361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.535398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.535410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.539740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.539779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.539791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.544555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.544592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.544604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.548933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.548971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.548983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.553329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.553366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.553378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.557740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.557778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.557791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.562127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.562163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.562176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.566541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.566579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.566592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.570895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.570932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.570945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.575305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.575339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.575351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.579648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.579684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.579697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.584005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.584044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.584056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.588437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.588475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.588488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.592813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.443 [2024-12-05 11:06:22.592848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.443 [2024-12-05 11:06:22.592860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.443 [2024-12-05 11:06:22.597117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.444 [2024-12-05 11:06:22.597152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.444 [2024-12-05 11:06:22.597164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.601412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.601448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.601460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.605759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.605795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.605806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.610065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.610110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.610123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.614463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.614498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 
[2024-12-05 11:06:22.614510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.618770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.618806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.618818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.623152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.623187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.623200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.627599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.627634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.627646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.632101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.632137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.632148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.636577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.636613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.636624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.640999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.641036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.641049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.645465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.645501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.645513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.649759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.649804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.649816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.654328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.654363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.654375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.658836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.704 [2024-12-05 11:06:22.658871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.704 [2024-12-05 11:06:22.658883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.704 [2024-12-05 11:06:22.663169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.663204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.663217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.667470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.667505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.667517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.671936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.671972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.671984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.676288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.676321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.676334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.680664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.680700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.680714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.685084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.685119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.685131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.689529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.689566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.689578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.693968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.694003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.694015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.698334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.698368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.698380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.702839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.702875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.702887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.707264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.707309] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.707321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.711556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.711590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.711602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.715943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.715978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.715991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.720230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.720267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.720294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.724617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.724654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.724666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.728941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.728977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.728989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.733603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.733638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.733650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.738021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.738055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.738067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.742407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.742441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.742453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.746841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.746877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.746889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.751335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.751369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.751381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.755650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.705 [2024-12-05 11:06:22.755685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.705 [2024-12-05 11:06:22.755698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:55.705 [2024-12-05 11:06:22.760401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.706 [2024-12-05 11:06:22.760438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.706 [2024-12-05 11:06:22.760450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:55.706 [2024-12-05 11:06:22.764822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:55.706 [2024-12-05 11:06:22.764858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:55.706 [2024-12-05 11:06:22.764869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:55.706 [2024-12-05 11:06:22.769251] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0)
00:22:55.706 [2024-12-05 11:06:22.769299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.706 [2024-12-05 11:06:22.769312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:22:55.706 [2024-12-05 11:06:22.773585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0)
00:22:55.706 [2024-12-05 11:06:22.773621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:55.706 [2024-12-05 11:06:22.773633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line sequence (data digest error on tqpair=(0x17209b0) -> READ command print -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats every ~4-5 ms from 11:06:22.777 through 11:06:23.198, cycling cid 0-15 on qid:1 with varying lba and len:32 throughout; the intervening near-identical records are elided ...]
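[Note: in NVMe/TCP, the optional data digest (DDGST) that trails a DATA PDU is a CRC32C over the PDU's data payload. Judging by its name, nvme_tcp_accel_seq_recv_compute_crc32_done is the host-side callback that runs once the CRC32C of a received C2HData payload (computed through SPDK's accel sequence machinery) is available, and "data digest error" means the recomputed CRC did not match the DDGST carried on the wire. A minimal, self-contained sketch of that check is below; it illustrates the CRC32C/DDGST comparison only and is not SPDK's implementation -- crc32c() and data_digest_ok() are hypothetical names.]

/*
 * Illustrative sketch of the NVMe/TCP data digest check: CRC32C
 * (Castagnoli, reflected polynomial 0x82F63B78) over the PDU data,
 * compared against the received DDGST trailer. Bitwise for clarity;
 * production code uses table-driven or hardware-accelerated CRC.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper: true when the recomputed digest matches the wire DDGST. */
static bool data_digest_ok(const uint8_t *pdu_data, size_t len, uint32_t recv_ddgst)
{
	return crc32c(pdu_data, len) == recv_ddgst;
}

int main(void)
{
	/* Standard CRC32C check value: "123456789" -> 0xE3069283. */
	const uint8_t vec[] = "123456789";
	uint32_t crc = crc32c(vec, strlen((const char *)vec));

	printf("crc32c=0x%08X digest_ok=%d\n", crc,
	       data_digest_ok(vec, strlen((const char *)vec), 0xE3069283u));
	return 0;
}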
00:22:56.231 6913.00 IOPS, 864.12 MiB/s [2024-12-05T11:06:23.390Z]
[... the digest-error repetitions continue from 11:06:23.204 through 11:06:23.396 (same pattern, cid 0-15 on qid:1); near-identical records elided ...]
00:22:56.493 [2024-12-05 11:06:23.401135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0)
00:22:56.493 [2024-12-05 11:06:23.401170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:56.493 [2024-12-05 11:06:23.401182]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.493 [2024-12-05 11:06:23.405477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.493 [2024-12-05 11:06:23.405513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.493 [2024-12-05 11:06:23.405525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.493 [2024-12-05 11:06:23.409808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.493 [2024-12-05 11:06:23.409842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.493 [2024-12-05 11:06:23.409854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.493 [2024-12-05 11:06:23.414163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.493 [2024-12-05 11:06:23.414197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.493 [2024-12-05 11:06:23.414209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.493 [2024-12-05 11:06:23.418428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.418463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.418475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.422790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.422825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.422837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.427146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.427182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.427194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.431496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.431536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:56.494 [2024-12-05 11:06:23.431548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.435831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.435868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.435880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.440268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.440313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.440325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.444611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.444647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.444659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.449004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.449040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.449052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.453398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.453437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.453449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.457759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.457796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.457808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.462052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.462096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.462108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.466488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.466523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.466535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.470831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.470868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.470880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.475142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.475178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.475190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.479492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.479528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.479539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.483876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.483911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.483923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.488182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.488219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.494 [2024-12-05 11:06:23.488230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.494 [2024-12-05 11:06:23.492569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.494 [2024-12-05 11:06:23.492604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.492617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.496893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.496928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.496940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.501211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.501248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.501259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.505552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.505587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.505599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.509874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.509909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.509921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.514247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.514293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.514307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.518595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.518631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.518643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.522928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 
[2024-12-05 11:06:23.522962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.522974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.527252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.527299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.527311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.531603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.531638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.531650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.535929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.535965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.535977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.540260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.540306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.540318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.544588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.544624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.544636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.548932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.548968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.548980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.553299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.553333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.553345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.557672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.557707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.557719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.562000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.562036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.562047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.566332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.566366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.495 [2024-12-05 11:06:23.566378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.495 [2024-12-05 11:06:23.570702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.495 [2024-12-05 11:06:23.570738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.570749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.575149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.575192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.575205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.579560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.579595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.579608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.583987] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.584022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.584034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.588336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.588370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.588383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.592669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.592705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.592718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.597038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.597073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.597085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.601419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.601454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.601466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.605686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.605721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.605732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.610149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.610185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.610197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:22:56.496 [2024-12-05 11:06:23.614645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.614680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.614692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.619070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.619106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.619118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.623458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.623508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.627793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.627840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.632155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.632191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.632203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.636509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.636544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.636557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.640934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.640982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.645331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.645367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.496 [2024-12-05 11:06:23.645379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.496 [2024-12-05 11:06:23.649758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.496 [2024-12-05 11:06:23.649794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.497 [2024-12-05 11:06:23.649806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.654191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.654226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.654239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.658698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.658734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.658746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.663234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.663290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.663303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.667730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.667766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.667778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.672176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.672212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.672224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.676584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.676618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.676631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.680934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.680970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.680982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.685338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.685373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.685386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.689705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.689740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.689752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.694117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.694151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.694163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.698495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.698530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.698542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.702844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.702879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.757 [2024-12-05 11:06:23.702891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.707306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.707340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.707352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.711683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.711718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.711730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.716120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.716157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.716169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.720577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.720612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.720625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.725010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.725045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.725058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.729401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.729435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.729447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.733682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.733718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9664 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.733730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.738108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.738141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.757 [2024-12-05 11:06:23.738153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.757 [2024-12-05 11:06:23.742513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.757 [2024-12-05 11:06:23.742548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.742560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.746845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.746879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.746891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.751231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.751267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.751294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.755529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.755565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.755577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.759869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.759906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.759918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.764148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.764183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.764195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.768466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.768501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.768514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.772832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.772868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.777143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.777180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.777192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.781481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.781517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.781529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.785838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.785872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.785884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.790218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.790254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.790266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.794530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 
00:22:56.758 [2024-12-05 11:06:23.794566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.794578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.798873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.798909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.798921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.803249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.803304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.803316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.807623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.807659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.807671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.811925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.811961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.811973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.816385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.816419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.816431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.820710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.820747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.820759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.825045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.825081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.825094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.829457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.829494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.829506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.833884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.833921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.833933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.758 [2024-12-05 11:06:23.838310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.758 [2024-12-05 11:06:23.838346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.758 [2024-12-05 11:06:23.838359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.842633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.842670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.842682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.846994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.847031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.847043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.851344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.851381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.851393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.855691] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.855725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.855737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.860073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.860111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.860123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.864411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.864446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.864458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.868744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.868780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.868792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.873036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.873071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.873083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.877372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.877407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.877420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.881742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.881774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.881786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:22:56.759 [2024-12-05 11:06:23.886156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.886192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.886205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.890570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.890606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.890618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.894935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.894971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.894983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.899296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.899331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.899343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.903601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.903635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.903647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.907896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.907931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.907943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:56.759 [2024-12-05 11:06:23.912176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:56.759 [2024-12-05 11:06:23.912212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.759 [2024-12-05 11:06:23.912224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.916531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.916566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.916578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.920829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.920863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.920875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.925211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.925246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.925258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.929553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.929589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.929601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.933980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.934016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.934028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.938311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.938345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.938357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.942611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.942649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.942661] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.946888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.946925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.946937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.951265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.951312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.951324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.955599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.955633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.955645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.959964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.959999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.960010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.964308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.964341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.964354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.968628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.968663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.968675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.972835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.972871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 
11:06:23.972883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.977114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.977149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.977161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.981462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.981497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.981509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.985756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.021 [2024-12-05 11:06:23.985790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.021 [2024-12-05 11:06:23.985802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.021 [2024-12-05 11:06:23.990106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:23.990140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:23.990152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:23.994449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:23.994485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:23.994497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:23.998764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:23.998799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:23.998811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.003085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.003122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.003134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.007417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.007452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.007464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.011730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.011764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.011776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.016015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.016050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.016062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.020321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.020353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.020366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.024665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.024699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.024711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.029001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.029037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.029049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.033390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.033425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.033438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.037709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.037744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.037755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.042070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.042113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.042125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.046414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.046450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.046463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.050700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.050736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.050748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.055091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.055126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.055138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.059435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.059471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.059483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.063826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.063860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.063872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.068103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.068138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.068150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.072579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.072615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.072627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.077039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.077075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.077087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.022 [2024-12-05 11:06:24.081428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.022 [2024-12-05 11:06:24.081462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.022 [2024-12-05 11:06:24.081474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.085761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.085796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.085808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.090165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.090201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.090213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.094577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 
[2024-12-05 11:06:24.094612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.094624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.098963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.098999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.099011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.103328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.103364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.103376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.107701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.107737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.107749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.112143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.112178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.112190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.116500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.116536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.116549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.120810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.120845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.120857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.125087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.125124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.125136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.129386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.129421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.129433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.133694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.133729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.133741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.138064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.138108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.138120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.142452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.142488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.142500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.146785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.146821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.146833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.151152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.151186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.151197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.155447] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.155483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.155495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.159748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.159788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.159801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.164076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.164113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.164125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.168458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.168493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.168505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.172825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.172860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.023 [2024-12-05 11:06:24.177324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.023 [2024-12-05 11:06:24.177357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.023 [2024-12-05 11:06:24.177370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.283 [2024-12-05 11:06:24.181797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.283 [2024-12-05 11:06:24.181833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.283 [2024-12-05 11:06:24.181844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:22:57.283 [2024-12-05 11:06:24.186253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.283 [2024-12-05 11:06:24.186298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.283 [2024-12-05 11:06:24.186310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:57.283 [2024-12-05 11:06:24.190666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.283 [2024-12-05 11:06:24.190701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.283 [2024-12-05 11:06:24.190713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.283 [2024-12-05 11:06:24.195116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.283 [2024-12-05 11:06:24.195152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.283 [2024-12-05 11:06:24.195164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:57.283 [2024-12-05 11:06:24.199609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17209b0) 00:22:57.283 [2024-12-05 11:06:24.199644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.283 [2024-12-05 11:06:24.199657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:57.283 6998.00 IOPS, 874.75 MiB/s 00:22:57.283 Latency(us) 00:22:57.283 [2024-12-05T11:06:24.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.283 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:57.283 nvme0n1 : 2.00 6996.87 874.61 0.00 0.00 2283.85 2052.93 11106.90 00:22:57.283 [2024-12-05T11:06:24.442Z] =================================================================================================================== 00:22:57.283 [2024-12-05T11:06:24.442Z] Total : 6996.87 874.61 0.00 0.00 2283.85 2052.93 11106.90 00:22:57.283 { 00:22:57.283 "results": [ 00:22:57.283 { 00:22:57.283 "job": "nvme0n1", 00:22:57.283 "core_mask": "0x2", 00:22:57.283 "workload": "randread", 00:22:57.283 "status": "finished", 00:22:57.283 "queue_depth": 16, 00:22:57.283 "io_size": 131072, 00:22:57.283 "runtime": 2.002611, 00:22:57.283 "iops": 6996.865591969684, 00:22:57.283 "mibps": 874.6081989962105, 00:22:57.283 "io_failed": 0, 00:22:57.283 "io_timeout": 0, 00:22:57.284 "avg_latency_us": 2283.8501112643553, 00:22:57.284 "min_latency_us": 2052.9349397590363, 00:22:57.284 "max_latency_us": 11106.904417670683 00:22:57.284 } 00:22:57.284 ], 00:22:57.284 "core_count": 1 00:22:57.284 } 00:22:57.284 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:57.284 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:22:57.284 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:57.284 | .driver_specific 00:22:57.284 | .nvme_error 00:22:57.284 | .status_code 00:22:57.284 | .command_transient_transport_error' 00:22:57.284 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 452 > 0 )) 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80540 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80540 ']' 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80540 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80540 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:57.543 killing process with pid 80540 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80540' 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80540 00:22:57.543 Received shutdown signal, test time was about 2.000000 seconds 00:22:57.543 00:22:57.543 Latency(us) 00:22:57.543 [2024-12-05T11:06:24.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.543 [2024-12-05T11:06:24.702Z] =================================================================================================================== 00:22:57.543 [2024-12-05T11:06:24.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.543 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80540 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80601 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80601 /var/tmp/bperf.sock 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80601 ']' 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
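The trace above (host/digest.sh@71 together with the bperf_rpc/jq pipeline from @27, @28 and @18) is how the harness decides the randread run passed: it fetches per-bdev I/O statistics over the bperf RPC socket, extracts the command_transient_transport_error counter maintained by --nvme-error-stat, and asserts it is non-zero; here 452 transient transport errors were recorded. A minimal standalone sketch of that check, assuming a bdevperf instance is still serving RPCs on /var/tmp/bperf.sock (paths are the ones this run used):

  # Sketch of the get_transient_errcount check traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Fail the run if no transient transport errors were counted.
  (( errcount > 0 ))
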
rpc_addr=/var/tmp/bperf.sock 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.802 11:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:57.802 [2024-12-05 11:06:24.835744] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:22:57.802 [2024-12-05 11:06:24.835856] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80601 ] 00:22:58.061 [2024-12-05 11:06:24.976026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.061 [2024-12-05 11:06:25.028205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.061 [2024-12-05 11:06:25.070198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:58.627 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.627 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:58.627 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.627 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.886 11:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:59.144 nvme0n1 00:22:59.144 11:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:59.144 11:06:26 
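The setup traced above for the follow-on randwrite run condenses to the sequence below: start bdevperf against the bperf socket, enable NVMe error statistics with unlimited bdev retries, keep CRC32C error injection disabled while attaching so the connect itself succeeds, attach the controller with data digests enabled (--ddgst), then switch injection to corrupt mode (the @67 rpc_cmd whose trace continues below). A hedged sketch using the binaries and arguments recorded in this log; the injection calls go through rpc_cmd, which is assumed here to resolve to scripts/rpc.py against the application's default RPC socket rather than /var/tmp/bperf.sock:

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd equivalent (assumed default socket): clear injection before attach.
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  "$spdk"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # "-t corrupt -i 256" is reproduced verbatim from the trace.
  "$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
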
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.144 11:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:59.144 11:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.144 11:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:59.144 11:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:59.404 Running I/O for 2 seconds... 00:22:59.404 [2024-12-05 11:06:26.392725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef7100 00:22:59.404 [2024-12-05 11:06:26.393978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.404 [2024-12-05 11:06:26.394021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:59.404 [2024-12-05 11:06:26.405508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef7970 00:22:59.404 [2024-12-05 11:06:26.406848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.404 [2024-12-05 11:06:26.406886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.404 [2024-12-05 11:06:26.418147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef81e0 00:22:59.404 [2024-12-05 11:06:26.419358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.404 [2024-12-05 11:06:26.419396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:59.404 [2024-12-05 11:06:26.430668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef8a50 00:22:59.404 [2024-12-05 11:06:26.431943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.404 [2024-12-05 11:06:26.431979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:59.404 [2024-12-05 11:06:26.443090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef92c0 00:22:59.405 [2024-12-05 11:06:26.444289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.444336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.455540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ef9b30 00:22:59.405 [2024-12-05 11:06:26.456741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:425 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.456776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.468013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efa3a0 00:22:59.405 [2024-12-05 11:06:26.469147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.469182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.480517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efac10 00:22:59.405 [2024-12-05 11:06:26.481707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.481741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.492870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efb480 00:22:59.405 [2024-12-05 11:06:26.493974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.494008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.505239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efbcf0 00:22:59.405 [2024-12-05 11:06:26.506343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.506375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.517555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efc560 00:22:59.405 [2024-12-05 11:06:26.518684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.518717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.530044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efcdd0 00:22:59.405 [2024-12-05 11:06:26.531179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.531214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:59.405 [2024-12-05 11:06:26.542847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efd640 00:22:59.405 [2024-12-05 11:06:26.543892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6929 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.405 [2024-12-05 11:06:26.543925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:22:59.405 [2024-12-05 11:06:26.555301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efdeb0
00:22:59.405 [2024-12-05 11:06:26.556411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.405 [2024-12-05 11:06:26.556445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:22:59.666 [6 similar record triples condensed, 2024-12-05 11:06:26.567851 through 11:06:26.637183: tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016efe720/0x200016eff3c8/0x200016efdeb0/0x200016efd640, each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 (cid:3, 1, 4, 8, 12, 16) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions (sqhd:0066 descending to sqhd:005b)]
00:22:59.666 [~75 similar record triples condensed, 2024-12-05 11:06:26.643687 through 11:06:27.374714: tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470, each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 (cid cycling 69 down to 63) and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0]
00:23:00.450 24443.00 IOPS, 95.48 MiB/s [2024-12-05T11:06:27.609Z]
00:23:00.450 [~55 similar record triples condensed, 2024-12-05 11:06:27.382913 through 11:06:27.902527: same Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 pattern, cid now cycling 38 down to 32, every completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:005c]
00:23:00.974 [2024-12-05 11:06:27.911794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470
00:23:00.974 [2024-12-05 11:06:27.911918] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.911937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.921361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.921479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.921498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.930944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.931069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.931089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.940815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.940973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.940997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.950794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.950930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.950961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.960735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.960862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.970737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.970875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.970897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.980554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 
11:06:27.980674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.980694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.990052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.990183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.990204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:27.999714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:27.999836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:27.999856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.009288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.009413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.009435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.019261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.019423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.019456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.029228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.029367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.029387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.039315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.039479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.039502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.049590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 
00:23:00.974 [2024-12-05 11:06:28.049721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.049744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.059623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.059741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.059761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.069292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.069411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.069432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.078809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.078925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.078944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.088219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.088350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.088371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.097702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.097819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.097838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.107124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.107240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.107259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.116639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with 
pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.116763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.116784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:00.974 [2024-12-05 11:06:28.126044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:00.974 [2024-12-05 11:06:28.126172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.974 [2024-12-05 11:06:28.126193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.136223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.136364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.136393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.145712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.145835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.145857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.155382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.155503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.155524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.164860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.164977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.164997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.174281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.174402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.174422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.183775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.183891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.183911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.193184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.193310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.193329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.202686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.202804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.202823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.212117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.212235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.212254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.221716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.221837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.221857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.231152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.231290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.240627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.240750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.240771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.250097] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.250216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.250235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.259672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.259793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.259813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.269208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.269340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.269359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.278994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.279144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.279176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.288720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.288847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.298499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.298623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.251 [2024-12-05 11:06:28.298643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.251 [2024-12-05 11:06:28.308364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.251 [2024-12-05 11:06:28.308482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.308507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.318123] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.318253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.318291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.327956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.328078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.328101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.337453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.337574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.337595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.346975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.347094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.347114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.356461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.356583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.356604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 [2024-12-05 11:06:28.365926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.366049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.366069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:01.252 25433.50 IOPS, 99.35 MiB/s [2024-12-05T11:06:28.411Z] [2024-12-05 11:06:28.375543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016ede470 00:23:01.252 [2024-12-05 11:06:28.375661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:01.252 [2024-12-05 11:06:28.375681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 
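Each repetition above is one injected failure: the test arms SPDK's accel error injection to corrupt crc32c results (the arming is visible in the second pass below), the data digest check on the TCP qpair fails, and the in-flight WRITE completes with status (00/22) -- SPDK prints sct/sc in hex, and generic status 0x22 is the NVMe "Transient Transport Error". A quick way to tally these completions from a saved copy of this output -- a sketch, assuming the bdevperf output has been captured to a hypothetical file bperf.log:

  # Count injected digest failures in a captured log (bperf.log is hypothetical).
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log
  # Break the failures down per command identifier; the completion lines alone
  # match this pattern, and the counts show the qpair cycling cids 32-38.
  grep -o '(00/22) qid:1 cid:[0-9]*' bperf.log | sort | uniq -c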
00:23:01.252
00:23:01.252 Latency(us)
00:23:01.252 [2024-12-05T11:06:28.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:23:01.252 nvme0n1 : 2.01 25435.79 99.36 0.00 0.00 5023.07 2368.77 18739.61
00:23:01.252 [2024-12-05T11:06:28.411Z] ===================================================================================================================
00:23:01.252 [2024-12-05T11:06:28.411Z] Total : 25435.79 99.36 0.00 0.00 5023.07 2368.77 18739.61
00:23:01.252 {
00:23:01.252 "results": [
00:23:01.252 {
00:23:01.252 "job": "nvme0n1",
00:23:01.252 "core_mask": "0x2",
00:23:01.252 "workload": "randwrite",
00:23:01.252 "status": "finished",
00:23:01.252 "queue_depth": 128,
00:23:01.252 "io_size": 4096,
00:23:01.252 "runtime": 2.006739,
00:23:01.252 "iops": 25435.794091807653,
00:23:01.252 "mibps": 99.35857067112364,
00:23:01.252 "io_failed": 0,
00:23:01.252 "io_timeout": 0,
00:23:01.252 "avg_latency_us": 5023.067214499909,
00:23:01.252 "min_latency_us": 2368.7710843373493,
00:23:01.252 "max_latency_us": 18739.61124497992
00:23:01.252 }
00:23:01.252 ],
00:23:01.252 "core_count": 1
00:23:01.252 }
00:23:01.252 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:01.252 | .driver_specific
00:23:01.252 | .nvme_error
00:23:01.252 | .status_code
00:23:01.252 | .command_transient_transport_error'
00:23:01.252 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80601
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80601 ']'
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80601
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:01.510 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80601
00:23:01.768 killing process with pid 80601
Received shutdown signal, test time was about 2.000000 seconds
00:23:01.768
00:23:01.768 Latency(us)
00:23:01.768 [2024-12-05T11:06:28.927Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:01.768 [2024-12-05T11:06:28.927Z] ===================================================================================================================
00:23:01.768 [2024-12-05T11:06:28.927Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:01.768 11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
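The pass/fail decision comes from bdev_nvme's --nvme-error-stat accounting rather than from parsing the console output: get_transient_errcount asks the bdevperf app for iostat over its RPC socket and jq pulls out the transient-transport-error counter, which the harness then only requires to be positive ((( 200 > 0 )) here). The same query can be reproduced by hand while bperf is still listening -- a sketch reusing the exact RPC call from the trace, with the jq filter collapsed onto one line:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  # Prints 200 for the run above; digest.sh@71 asserts only that the value is > 0.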
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- '[' reactor_1 = sudo ']'
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80601'
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80601
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80601
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80661
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80661 /var/tmp/bperf.sock
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80661 ']'
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
11:06:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
[2024-12-05 11:06:28.898032] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
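run_bperf_err now repeats the experiment with 128 KiB random writes at queue depth 16. bdevperf is started with -z, which (as used throughout this test) keeps it idle until a run is driven over the RPC socket named by -r, and the zero-copy notice fires simply because the 131072-byte I/O size exceeds the socket layer's 65536-byte zero-copy threshold. An equivalent manual launch -- a sketch, assuming the same repo layout as the trace:

  # Start bdevperf idle (-z) on a private RPC socket; all configuration follows via RPC.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten in the harness then polls until /var/tmp/bperf.sock accepts connections.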
00:23:01.769 [2024-12-05 11:06:28.898122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80661 ]
00:23:02.026 [2024-12-05 11:06:29.050912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:02.026 [2024-12-05 11:06:29.105169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:02.026 [2024-12-05 11:06:29.148485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:23:02.988 11:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:02.988 11:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:23:02.988 11:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:02.988 11:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:02.988 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:03.245 nvme0n1
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:03.245 11:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:03.505 I/O size of 131072 is greater than zero copy threshold (65536).
00:23:03.505 Zero copy mechanism will not be used.
00:23:03.505 Running I/O for 2 seconds...
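With bdevperf sitting idle, the whole error scenario is assembled over JSON-RPC before perform_tests starts the clock. Collapsed from the trace above into a plain script -- a sketch whose commands are copied verbatim from the log; note the two endpoints: bperf_rpc passes -s /var/tmp/bperf.sock to reach bdevperf, while rpc_cmd carries no -s flag here and so presumably reaches the nvmf target's default socket, arming the crc32c corruption on the side that verifies the data digest of incoming WRITEs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable    # turn crc32c injection off before attaching, as the trace does
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # --ddgst enables the NVMe/TCP data digest
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--nvme-error-stat is what later lets bdev_get_iostat report the per-status-code error counters, and --bdev-retry-count -1 plausibly keeps the bdev layer retrying the failed WRITEs for the whole 2-second run, which would be consistent with "io_failed": 0 in the earlier results despite 200 transient errors.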
00:23:03.505 [2024-12-05 11:06:30.503530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
[2024-12-05 11:06:30.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-05 11:06:30.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the injected-failure pattern continues for the 128 KiB randwrite pass: every few milliseconds between 11:06:30.507 and 11:06:30.648 a data digest error is flagged on tqpair=(0x14d7c20) with pdu=0x200016eff3c8, and the 32-block WRITE (cids 0-2, varying LBAs, sqhd stepping 0002/0022/0042/0062) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
[2024-12-05 11:06:30.651490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
[2024-12-05 11:06:30.651539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:03.506 [2024-12-05 11:06:30.651559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.506 [2024-12-05 11:06:30.655132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.506 [2024-12-05 11:06:30.655190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.506 [2024-12-05 11:06:30.655209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.506 [2024-12-05 11:06:30.658749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.506 [2024-12-05 11:06:30.658828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.506 [2024-12-05 11:06:30.658847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.506 [2024-12-05 11:06:30.662471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.506 [2024-12-05 11:06:30.662560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.506 [2024-12-05 11:06:30.662580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.666125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.666236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.666256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.669740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.669868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.669888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.673026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.673287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.673307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.676563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.676618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.676638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.680118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.680180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.680200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.683757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.683828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.683848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.687416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.687486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.687506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.691000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.691059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.691078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.694799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.694900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.694922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.698498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.698569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.698589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.702225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.702324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.702345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.705953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.706128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.706148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.709329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.709593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.709612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.712862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.712921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.712940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.716477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.716531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.716550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.720080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.766 [2024-12-05 11:06:30.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.766 [2024-12-05 11:06:30.720174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.766 [2024-12-05 11:06:30.723737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.723804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.723824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.727373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.727438] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.727458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.730978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.731089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.731109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.735240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.735335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.735355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.738886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.739028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.739047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.742187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.742457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.742495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.745709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.745760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.745780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.749551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.749625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.749645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.753208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.753265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.753296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.756945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.757013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.757032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.760635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.760711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.760731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.764325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.764393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.764413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.767998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.768156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.768185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.771659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.771730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.775338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.775395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.775415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.778555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 
11:06:30.778889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.778917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.782111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.782192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.782212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.785786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.785838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.785858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.789403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.789453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.789473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.793086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.793145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.793165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.796747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.796821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.796842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.800426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.800540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.767 [2024-12-05 11:06:30.800561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.767 [2024-12-05 11:06:30.804069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with 
pdu=0x200016eff3c8 00:23:03.767 [2024-12-05 11:06:30.804187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.804206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.807360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.807588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.807608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.810803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.810857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.810876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.814456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.814507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.814528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.818061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.818138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.818159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.821761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.821817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.821837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.825466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.825532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.825552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.829071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.829188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.829208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.832717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.832785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.832805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.836392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.836446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.836466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.839642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.839967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.839986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.843146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.843216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.843236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.846796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.846850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.846870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.850521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.850579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.850599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.854206] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.854260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.854294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.858068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.858128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.858148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.861681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.861731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.861750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.865350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.865449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.865468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.868970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.869091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.869111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.872283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.872521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.872541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.875755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.875805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.875824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.879347] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.879394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.879414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.768 [2024-12-05 11:06:30.882970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.768 [2024-12-05 11:06:30.883044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.768 [2024-12-05 11:06:30.883064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.886721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.886773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.886793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.890447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.890502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.890521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.894063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.894145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.894165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.897692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.897767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.897787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.901387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.901543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.901563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.769 
[2024-12-05 11:06:30.904743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.905011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.905037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.908264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.908329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.908349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.911871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.911922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.911942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.915523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.915575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.915595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.919190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.919259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.919292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:03.769 [2024-12-05 11:06:30.922809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:03.769 [2024-12-05 11:06:30.922905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.769 [2024-12-05 11:06:30.922925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.926470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.926608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.926628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.930046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.930112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.930133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.933660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.933718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.933738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.936884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.937195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.937214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.940398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.940468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.940487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.944044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.944100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.944119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.947784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.947834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.947854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.951406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.955118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.955187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.955206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.958773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.958859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.958879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.962382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.962497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.962516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.965661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.965886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.965905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.969146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.969198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.969218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.972800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.972857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.029 [2024-12-05 11:06:30.972876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.029 [2024-12-05 11:06:30.976472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.029 [2024-12-05 11:06:30.976523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.976543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.980106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.980178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.980197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.983747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.983797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.983817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.987443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.987569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.987590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.991055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.991129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.991148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.994685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.994733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.994753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:30.997994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:30.998337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:30.998356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.001579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.001650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.001670] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.005209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.005264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.005296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.008859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.008908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.008928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.012516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.012568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.012588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.016203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.016304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.019802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.019870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.019889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.023459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.023569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.023591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.026747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.026987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 
11:06:31.027013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.030201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.030254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.030285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.034051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.034112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.034132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.037828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.030 [2024-12-05 11:06:31.037882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.030 [2024-12-05 11:06:31.037902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.030 [2024-12-05 11:06:31.041558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.041651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.041672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.045244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.045308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.045329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.048984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.049092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.049113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.052659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.052787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:04.031 [2024-12-05 11:06:31.052807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.055956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.056199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.056225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.059453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.059508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.059528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.063193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.063245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.063265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.066891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.066943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.066963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.070497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.070556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.070576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.074056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.074126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.074146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.077646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.077782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.077801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.081284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.081359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.081378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.084959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.085009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.085029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.088254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.088587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.088611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.091782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.091854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.091874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.095413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.095464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.095484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.098996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.099062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.099082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.102619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.102696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.031 [2024-12-05 11:06:31.106313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.031 [2024-12-05 11:06:31.106387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.031 [2024-12-05 11:06:31.106408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.109992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.110093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.110113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.113651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.113774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.113793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.117484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.117628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.117648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.121391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.121540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.121559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.124838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.125092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.125112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.128477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.128530] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.128550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.132125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.132175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.132195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.135882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.135934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.135954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.139522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.139576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.139597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.143172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.143247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.143267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.146771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.146897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.146916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.150377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.150450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.150470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.154144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.154296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.154316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.157607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.157868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.157888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.161242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.161307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.161327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.164960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.165010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.165030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.168650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.168698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.168719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.032 [2024-12-05 11:06:31.172345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.032 [2024-12-05 11:06:31.172403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.032 [2024-12-05 11:06:31.172424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.033 [2024-12-05 11:06:31.176061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.033 [2024-12-05 11:06:31.176124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.033 [2024-12-05 11:06:31.176144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.033 [2024-12-05 11:06:31.179718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.033 [2024-12-05 
11:06:31.179769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.033 [2024-12-05 11:06:31.179789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.033 [2024-12-05 11:06:31.183617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.033 [2024-12-05 11:06:31.183717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.033 [2024-12-05 11:06:31.183738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.033 [2024-12-05 11:06:31.187469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.033 [2024-12-05 11:06:31.187602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.033 [2024-12-05 11:06:31.187622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.190904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.191157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.191183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.194435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.194490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.194510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.198130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.198190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.198211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.201873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.201923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.201943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.205620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 
00:23:04.293 [2024-12-05 11:06:31.205671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.205692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.209369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.209417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.209437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.213089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.213157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.213177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.216873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.216947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.216966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.220829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.220886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.220907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.224363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.224687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.224707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.228004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.228074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.228094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.293 [2024-12-05 11:06:31.231697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.293 [2024-12-05 11:06:31.231745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.293 [2024-12-05 11:06:31.231766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.235487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.235553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.235573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.239131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.239203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.239223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.243008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.243068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.243087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.246834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.246893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.246913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.250484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.250547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.250567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.254240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.254329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.254349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.257937] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.257989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.258008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.261316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.261640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.261666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.264975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.265047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.265067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.268735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.268796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.268816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.272545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.272602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.272622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.276263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.276334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.276353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.279987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.280055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.280075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.283767] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.283821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.283841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.287473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.287610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.287629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.290872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.291147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.291174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.294440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.294492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.294511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.298096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.298145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.298165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.301918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.301970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.301989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.305744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.305795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.305815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.294 
[2024-12-05 11:06:31.309567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.309616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.309636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.313269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.294 [2024-12-05 11:06:31.313357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.294 [2024-12-05 11:06:31.313376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.294 [2024-12-05 11:06:31.316994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.317074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.317093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.320732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.320791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.320811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.324525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.324607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.324627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.328205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.328372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.331491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.331740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.334963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.335016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.335036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.338934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.339005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.339024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.342595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.342652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.342671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.346296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.346346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.346366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.349885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.349935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.349955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.353511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.353561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.353581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.357136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.357212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.357232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.360765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.360831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.360850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.364479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.364643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.364668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.367802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.368063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.368087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.371268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.371331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.371351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.374908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.374958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.374978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.378555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.378605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.378625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.382177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.382228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.382248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.385849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.385899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.385919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.389602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.389665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.389685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.393306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.393390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.295 [2024-12-05 11:06:31.393410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.295 [2024-12-05 11:06:31.397023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.295 [2024-12-05 11:06:31.397074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.397094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.400345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.400657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.403935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.404004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.404024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.407728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.407784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.407804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.411377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.411436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.411456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.415023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.415081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.418672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.418729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.418766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.422357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.422439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.422458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.426026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.426163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.426183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.429386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.429637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 11:06:31.429656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.296 [2024-12-05 11:06:31.432936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.296 [2024-12-05 11:06:31.432985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.296 [2024-12-05 
11:06:31.433004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:23:04.296 [2024-12-05 11:06:31.436617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
00:23:04.296 [2024-12-05 11:06:31.436664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.296 [2024-12-05 11:06:31.436684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-entry sequence (data_crc32_calc_done *ERROR*, WRITE command *NOTICE*, TRANSIENT TRANSPORT ERROR completion *NOTICE*) repeats on tqpair=(0x14d7c20), qid:1, cid 0-1, varying lba, from 11:06:31.440 through 11:06:31.491 ...]
00:23:04.557 8447.00 IOPS, 1055.88 MiB/s [2024-12-05T11:06:31.716Z]
[... pattern continues from 11:06:31.496 through 11:06:31.511 ...]
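One sanity check the ticker line allows: 1055.88 MiB/s divided by 8447.00 IOPS is almost exactly 0.125 MiB, i.e. 128 KiB per I/O (8447 x 0.125 MiB = 1055.875 MiB/s). That is consistent with the len:32 WRITEs above if the namespace uses 4 KiB blocks (32 x 4096 B = 131072 B = 128 KiB); the block size is an inference from this arithmetic, not something the log states.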
[... the same error/WRITE/completion sequence repeats on tqpair=(0x14d7c20), qid:1, cid 0-2, from 11:06:31.514 through 11:06:31.709 ...]
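Each data_crc32_calc_done *ERROR* above is the receive path of SPDK's NVMe/TCP transport recomputing the CRC32C data digest (DDGST) over a PDU's DATA field and finding it differs from the digest carried in the PDU. Below is a minimal self-contained sketch of that check; it is plain C, not SPDK's implementation (SPDK uses its own spdk_crc32c_update() helper), and the buffer contents plus the injected bit-flip are purely illustrative. CRC32C (the Castagnoli polynomial, 0xFFFFFFFF initial value and final XOR) is what the NVMe/TCP spec mandates for header and data digests.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time, reflected CRC32C (Castagnoli): reflected polynomial
 * 0x82F63B78, init 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Illustrative PDU payload: 32 blocks of 4 KiB, matching the len:32 WRITEs
 * above under the (assumed) 4 KiB block size. */
static uint8_t pdu_data[32 * 4096];

int main(void)
{
    memset(pdu_data, 0xA5, sizeof(pdu_data));

    uint32_t ddgst_sent = crc32c(pdu_data, sizeof(pdu_data)); /* sender side */
    pdu_data[1234] ^= 0x01;             /* simulate corruption on the wire */
    uint32_t ddgst_recv = crc32c(pdu_data, sizeof(pdu_data)); /* receiver side */

    if (ddgst_recv != ddgst_sent) {
        /* This is the condition tcp.c reports as "Data digest error"; the
         * command is then completed with a transient transport status. */
        fprintf(stderr, "Data digest error: got 0x%08x, expected 0x%08x\n",
                ddgst_recv, ddgst_sent);
        return 1;
    }
    return 0;
}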
[... same pattern, 11:06:31.712 through 11:06:31.761 ...]
00:23:04.881 [2024-12-05 11:06:31.765521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
00:23:04.881 [2024-12-05 11:06:31.765587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.882 [2024-12-05 11:06:31.765607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... same pattern, 11:06:31.769 through 11:06:31.840 ...]
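Every completion in this run is printed with status (00/22) and dnr:0: status code type 0x0 (generic command status) and status code 0x22, which the NVMe base specification defines as Transient Transport Error, with the Do Not Retry bit clear. That combination tells the host the failure is retryable. A hedged host-side sketch of that triage follows; the struct and names below are illustrative, not SPDK's types.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Fields as spdk_nvme_print_completion prints them: (sct/sc) plus dnr. */
struct cpl_status {
    uint8_t sct;   /* status code type: 0x0 = generic command status  */
    uint8_t sc;    /* status code: 0x22 = Transient Transport Error   */
    bool    dnr;   /* Do Not Retry bit                                */
};

#define SCT_GENERIC                 0x0
#define SC_TRANSIENT_TRANSPORT_ERR  0x22

/* Retry only when the controller did not forbid it (dnr clear) and the
 * error is the transient transport kind seen throughout this log. */
static bool should_retry(const struct cpl_status *s)
{
    return !s->dnr && s->sct == SCT_GENERIC &&
           s->sc == SC_TRANSIENT_TRANSPORT_ERR;
}

int main(void)
{
    struct cpl_status s = { .sct = 0x0, .sc = 0x22, .dnr = false }; /* (00/22) dnr:0 */
    printf("retry: %s\n", should_retry(&s) ? "yes" : "no");  /* prints "retry: yes" */
    return 0;
}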
[... same pattern, 11:06:31.844 through 11:06:31.968 ...]
00:23:04.884 [2024-12-05 11:06:31.972052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with 
pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.972121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.972140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.975967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.976084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.976104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.979877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.979991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.980011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.983693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.983747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.983767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.987030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.987388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.987412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.990713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.990785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.990804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.994440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.994491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.994511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:31.998155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:31.998220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:31.998240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:32.001891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:32.001943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:32.001963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:32.005727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:32.005785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:32.005803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:32.009529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:32.009625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:32.009644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.884 [2024-12-05 11:06:32.013348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.884 [2024-12-05 11:06:32.013471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.884 [2024-12-05 11:06:32.013490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.016779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.017018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.017037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.020393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.020444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.020463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.024103] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.024162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.024181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.027883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.027933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.027953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.031538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.031615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.031651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:04.885 [2024-12-05 11:06:32.035419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:04.885 [2024-12-05 11:06:32.035490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.885 [2024-12-05 11:06:32.035512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.039173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.039256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.039277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.043067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.043145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.043167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.046914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.046976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.046996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.050353] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.050696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.050722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.054068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.054166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.054186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.057807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.057856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.057876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.061599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.061656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.061677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.065342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.065393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.065413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.069144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.069207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.069227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.145 [2024-12-05 11:06:32.072899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.072956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.072976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.145 
[2024-12-05 11:06:32.076861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.145 [2024-12-05 11:06:32.076920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.145 [2024-12-05 11:06:32.076941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.080706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.080829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.080849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.084551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.084698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.084717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.087966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.088234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.088260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.091582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.091641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.091660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.095229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.095292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.095313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.099045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.099100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.103028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.103084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.103104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.106872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.106935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.106955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.110646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.110704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.110723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.114447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.114512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.114532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.118260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.118360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.118380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.122159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.122218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.122238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.125568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.125913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.125939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.129215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.129300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.129319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.132932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.132981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.133001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.136702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.136760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.136780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.140450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.146 [2024-12-05 11:06:32.140529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.146 [2024-12-05 11:06:32.140548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.146 [2024-12-05 11:06:32.144211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.144292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.144312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.148040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.148157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.148176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.151837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.151976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.151997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.155328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.155583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.155609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.159036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.159093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.159114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.162860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.162915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.162936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.166693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.166746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.166768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.170551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.170604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.170625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.174331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.174390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.178059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.178158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.178178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.181851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.181928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.181948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.185645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.185713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.185733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.189419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.189469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.189488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.192809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.193147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.193167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.196498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.196567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.196586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.200176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.200228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.200248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.204010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.204060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 
11:06:32.204081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.207773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.207826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.147 [2024-12-05 11:06:32.207847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.147 [2024-12-05 11:06:32.211626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.147 [2024-12-05 11:06:32.211714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.211735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.215431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.215547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.215568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.219239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.219382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.219402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.222750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.223008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.223028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.226423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.226478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.226499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.230293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.230344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:05.148 [2024-12-05 11:06:32.230363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.234093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.234144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.234164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.237832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.237887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.237907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.241540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.241606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.241626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.245290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.245345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.245365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.248902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.248973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.248993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.252612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.252663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.252682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.255929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.256278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.259885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.259993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.263686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.263740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.263759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.267389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.267441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.267461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.271294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.271348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.271369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.148 [2024-12-05 11:06:32.275152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.148 [2024-12-05 11:06:32.275205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.148 [2024-12-05 11:06:32.275225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.278938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.278993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.279013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.282615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.282684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.282704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.286248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.286325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.286345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.290056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.290180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.290199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.293829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.293957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.293977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.297291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.297530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.297550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.149 [2024-12-05 11:06:32.300923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.149 [2024-12-05 11:06:32.300977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.149 [2024-12-05 11:06:32.300998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.304772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.304848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.308549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.308599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.308619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.312378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.312430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.312451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.316219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.316283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.316304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.320177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.320245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.320265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.324014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.324082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.324102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.327934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.327985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.328004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.331409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.331737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.331762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.335114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.335195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.335216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.338938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.338998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.339017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.409 [2024-12-05 11:06:32.342705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.409 [2024-12-05 11:06:32.342765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.409 [2024-12-05 11:06:32.342785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.346488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.346543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.346563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.350241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.350338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.350358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.354015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.354158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.354177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.357755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.357898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.357917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.361083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 
11:06:32.361331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.361351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.364719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.364773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.364792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.368567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.368618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.368638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.372285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.372338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.372359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.376139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.376199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.376222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.380057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.380127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.380148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.383815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.383880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.383900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.387467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 
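The three-record pattern repeating above is the digest-error path doing exactly what this test wants: bdevperf sends WRITE PDUs whose data digest (DDGST) has been corrupted, the target's data_crc32_calc_done callback in tcp.c detects the CRC32C mismatch, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable transport status rather than a media error. None of this is observable unless the controller was attached with digests enabled; a minimal sketch of such an attach over the bperf RPC socket, assuming rpc.py's --hdgst/--ddgst switches (flag names can differ between SPDK releases):

  # sketch only, not the harness code: attach an NVMe/TCP controller with
  # header and data digests turned on so DDGST mismatches are detectable
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --hdgst --ddgst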
00:23:05.410 [2024-12-05 11:06:32.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.387561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.391237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.391402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.391421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.394567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.394826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.394846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.398017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.398068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.398097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.401705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.401756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.401775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.405345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.405398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.405417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.409032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.409090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.409110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.412780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.412830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.412850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.416603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.416699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.416719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.420427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.420526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.410 [2024-12-05 11:06:32.424286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.410 [2024-12-05 11:06:32.424339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.410 [2024-12-05 11:06:32.424361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.427702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.428060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.428080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.431425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.431509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.431529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.435198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.435250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.435269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.438988] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.439043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.439064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.442746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.442802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.442822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.446432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.446513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.446532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.450139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.450240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.450259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.453935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.453988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.454008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.457666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.457715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.457734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.461040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.461377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.461402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.464714] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.464791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.464810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.468497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.468551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.468571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.472189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.472240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.472260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.475950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.476011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.476030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.479665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.479781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.479802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.483267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.483380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.483400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:05.411 [2024-12-05 11:06:32.486872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8 00:23:05.411 [2024-12-05 11:06:32.486990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:05.411 [2024-12-05 11:06:32.487010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:05.411 
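After the timed run, the harness verifies the failure injection by counting completions rather than grepping the log: get_transient_errcount queries bdevperf's RPC socket for per-bdev iostat and pulls the transient-transport-error counter out of the returned JSON with jq, as the trace below shows, and the test only passes if the count is positive. Condensed into a standalone sketch (socket path and bdev name as used by this run):

  get_transient_errcount() {
      # bdev_get_iostat returns JSON; nvme_error.status_code tallies completions per status
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  (( $(get_transient_errcount nvme0n1) > 0 ))   # this run counted 543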
[2024-12-05 11:06:32.490188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
00:23:05.411 [2024-12-05 11:06:32.490451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.411 [2024-12-05 11:06:32.490471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:23:05.411 8391.50 IOPS, 1048.94 MiB/s [2024-12-05T11:06:32.570Z] [2024-12-05 11:06:32.494591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14d7c20) with pdu=0x200016eff3c8
00:23:05.411 [2024-12-05 11:06:32.494643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:05.411 [2024-12-05 11:06:32.494664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:23:05.411
00:23:05.411 Latency(us)
00:23:05.411 [2024-12-05T11:06:32.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.411 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:23:05.411 nvme0n1 : 2.00 8387.79 1048.47 0.00 0.00 1903.82 1315.98 10896.35
00:23:05.411 [2024-12-05T11:06:32.570Z] ===================================================================================================================
00:23:05.411 [2024-12-05T11:06:32.570Z] Total : 8387.79 1048.47 0.00 0.00 1903.82 1315.98 10896.35
00:23:05.411 {
00:23:05.411   "results": [
00:23:05.411     {
00:23:05.411       "job": "nvme0n1",
00:23:05.411       "core_mask": "0x2",
00:23:05.411       "workload": "randwrite",
00:23:05.411       "status": "finished",
00:23:05.411       "queue_depth": 16,
00:23:05.411       "io_size": 131072,
00:23:05.411       "runtime": 2.003269,
00:23:05.411       "iops": 8387.790156988402,
00:23:05.411       "mibps": 1048.4737696235502,
00:23:05.411       "io_failed": 0,
00:23:05.411       "io_timeout": 0,
00:23:05.411       "avg_latency_us": 1903.8202462889708,
00:23:05.411       "min_latency_us": 1315.9839357429719,
00:23:05.411       "max_latency_us": 10896.346987951807
00:23:05.411     }
00:23:05.411   ],
00:23:05.412   "core_count": 1
00:23:05.412 }
00:23:05.412 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:05.412 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:05.412 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:05.412 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:05.412 | .driver_specific
00:23:05.412 | .nvme_error
00:23:05.412 | .status_code
00:23:05.412 | .command_transient_transport_error'
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 543 > 0 ))
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80661
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80661 ']'
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80661
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80661
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80661'
00:23:05.670 killing process with pid 80661
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80661
00:23:05.670 Received shutdown signal, test time was about 2.000000 seconds
00:23:05.670
00:23:05.670 Latency(us)
00:23:05.670 [2024-12-05T11:06:32.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.670 [2024-12-05T11:06:32.829Z] ===================================================================================================================
00:23:05.670 [2024-12-05T11:06:32.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:05.670 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80661
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80447
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80447 ']'
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80447
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80447
00:23:05.929 11:06:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:05.929 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:05.929 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80447'
00:23:05.929 killing process with pid 80447
00:23:05.929 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80447
00:23:05.929 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80447
00:23:06.187
00:23:06.187 real 0m17.939s
00:23:06.187 user 0m33.536s
00:23:06.187 sys 0m5.654s
00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:06.187 ************************************
00:23:06.187 END TEST nvmf_digest_error
00:23:06.187 ************************************
00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:23:06.187 11:06:33
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:06.187 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:06.187 rmmod nvme_tcp 00:23:06.187 rmmod nvme_fabrics 00:23:06.187 rmmod nvme_keyring 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 80447 ']' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 80447 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80447 ']' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80447 00:23:06.446 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80447) - No such process 00:23:06.446 Process with pid 80447 is not found 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80447 is not found' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:06.446 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:23:06.447 00:23:06.447 real 0m36.942s 00:23:06.447 user 1m6.927s 00:23:06.447 sys 0m11.592s 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.447 11:06:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:06.447 ************************************ 00:23:06.447 END TEST nvmf_digest 00:23:06.447 ************************************ 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.706 
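Between the two tests, nvmftestfini's teardown (above) mirrors the setup in reverse: it syncs, unloads nvme-tcp, nvme-fabrics and nvme-keyring (the three rmmod lines), tries to kill the target process (already gone, hence the "No such process" note), and then nvmf_fini dismantles the network: the bridge first, then the host-side veth ends; target0/target1 are skipped with `continue` because they lived inside the nvmf_ns_spdk namespace that _remove_target_ns had already deleted. Done by hand, the network part is roughly:

  # sketch of the equivalent manual cleanup, using this harness's device names
  ip link delete nvmf_br        # bridge joining the initiator/target veth peers
  ip link delete initiator0     # deleting one veth end removes its peer too
  ip link delete initiator1
  # target0/target1 vanished together with the namespace:
  # ip netns delete nvmf_ns_spdk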
11:06:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.706 ************************************ 00:23:06.706 START TEST nvmf_host_multipath 00:23:06.706 ************************************ 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:06.706 * Looking for test storage... 00:23:06.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:23:06.706 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:06.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.968 --rc genhtml_branch_coverage=1 00:23:06.968 --rc genhtml_function_coverage=1 00:23:06.968 --rc genhtml_legend=1 00:23:06.968 --rc geninfo_all_blocks=1 00:23:06.968 --rc geninfo_unexecuted_blocks=1 00:23:06.968 00:23:06.968 ' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:06.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.968 --rc genhtml_branch_coverage=1 00:23:06.968 --rc genhtml_function_coverage=1 00:23:06.968 --rc genhtml_legend=1 00:23:06.968 --rc geninfo_all_blocks=1 00:23:06.968 --rc geninfo_unexecuted_blocks=1 00:23:06.968 00:23:06.968 ' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:06.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.968 --rc genhtml_branch_coverage=1 00:23:06.968 --rc genhtml_function_coverage=1 00:23:06.968 --rc genhtml_legend=1 00:23:06.968 --rc geninfo_all_blocks=1 00:23:06.968 --rc geninfo_unexecuted_blocks=1 00:23:06.968 00:23:06.968 ' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:06.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.968 --rc genhtml_branch_coverage=1 00:23:06.968 --rc genhtml_function_coverage=1 00:23:06.968 --rc genhtml_legend=1 00:23:06.968 --rc geninfo_all_blocks=1 00:23:06.968 --rc geninfo_unexecuted_blocks=1 00:23:06.968 00:23:06.968 ' 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.968 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@50 -- # : 0 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:06.969 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:06.969 11:06:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # return 0 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:06.969 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:06.970 11:06:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@61 -- # add_to_ns target0 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:06.970 10.0.0.1 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:06.970 10.0.0.2 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:06.970 11:06:34 
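The integer-to-address dance above is just packed IPv4: setup_interfaces starts its pool at 0x0a000001 (167772161) and hands out consecutive values, and val_to_ip unpacks each into a dotted quad, which is why initiator0 gets 10.0.0.1 and target0 gets 10.0.0.2. The unpacking is plain shift-and-mask arithmetic; a sketch:

  # 167772162 == 0x0a000002 -> 10.0.0.2
  val=167772162
  printf '%u.%u.%u.%u\n' $(( val >> 24 )) $(( (val >> 16) & 255 )) \
      $(( (val >> 8) & 255 )) $(( val & 255 ))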
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:06.970 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:07.231 11:06:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:07.231 10.0.0.3 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:07.231 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:07.232 10.0.0.4 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 
-- # ip link set initiator1_br up 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.232 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:07.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:07.492 00:23:07.492 --- 10.0.0.1 ping statistics --- 00:23:07.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.492 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 
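[Annotation] The setup above builds the test topology one pair at a time: a veth pair initiatorN/initiatorN_br stays in the root namespace, its peer pair targetN/targetN_br has the targetN end moved into nvmf_ns_spdk, both _br ends are enslaved to the nvmf_br bridge, and an iptables ACCEPT rule opens TCP/4420 on each initiator device. Addresses are carried around as 32-bit integers (167772161 is 0x0A000001, i.e. 10.0.0.1), rendered with printf, applied to the device, and mirrored into /sys/class/net/<dev>/ifalias so later helpers can read them back with a plain cat. A minimal sketch of that conversion; the helper name ip_from_u32 is hypothetical, and the shift arithmetic is an assumption about how val_to_ip derives the octets (the log only shows the final printf):

    # Hypothetical helper mirroring nvmf/setup.sh's val_to_ip: turn a 32-bit
    # integer such as 167772161 (0x0A000001) into dotted-quad notation.
    ip_from_u32() {
        local val=$1
        printf '%u.%u.%u.%u\n' \
            $(( (val >> 24) & 255 )) \
            $(( (val >> 16) & 255 )) \
            $(( (val >> 8)  & 255 )) \
            $((  val        & 255 ))
    }

    ip_from_u32 167772161   # 10.0.0.1 -> assigned to initiator0
    ip_from_u32 167772162   # 10.0.0.2 -> assigned to target0 inside nvmf_ns_spdk

    # As in the log, each address is both configured and recorded in ifalias:
    #   ip addr add 10.0.0.1/24 dev initiator0
    #   echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias
    # get_ip_address later recovers it with: cat /sys/class/net/initiator0/ifalias

The ping_ips pass that follows then verifies each pair in both directions: initiator addresses are pinged from inside nvmf_ns_spdk, target addresses from the root namespace.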
00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:07.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:23:07.492 00:23:07.492 --- 10.0.0.2 ping statistics --- 00:23:07.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.492 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:07.492 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:07.492 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:07.492 00:23:07.492 --- 10.0.0.3 ping statistics --- 00:23:07.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.492 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:07.492 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:07.493 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:07.493 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.146 ms 00:23:07.493 00:23:07.493 --- 10.0.0.4 ping statistics --- 00:23:07.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.493 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@281 -- # return 0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.493 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@328 -- # nvmfpid=80979 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@329 -- # waitforlisten 80979 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80979 ']' 00:23:07.494 11:06:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.494 11:06:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:07.753 [2024-12-05 11:06:34.695906] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:23:07.753 [2024-12-05 11:06:34.695980] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.753 [2024-12-05 11:06:34.849611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:07.753 [2024-12-05 11:06:34.895748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.753 [2024-12-05 11:06:34.895799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.753 [2024-12-05 11:06:34.895810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.753 [2024-12-05 11:06:34.895818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.753 [2024-12-05 11:06:34.895825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
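[Annotation] At this point the target is up: nvmf_tgt was launched inside the nvmf_ns_spdk namespace (ip netns exec nvmf_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and waitforlisten blocks until /var/tmp/spdk.sock answers. The multipath test that follows repeats one pattern: flip the ANA state of the two listeners, let bdevperf drive I/O, then compare what the target advertises against where bpftrace actually counted I/O. A condensed sketch of that check, using only commands that appear verbatim in this run (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # Make port 4421 the optimized path and leave 4420 non-optimized:
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # Ask the target which listener it now reports as optimized:
    rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
    # Expected: 4421. confirm_io_on_port compares this against the port parsed
    # out of the bpftrace counters in trace.txt (the "@path[10.0.0.2, 4421]: N"
    # lines below), so the test fails if I/O flowed on any path other than the
    # advertised one. When both listeners are set inaccessible, the probe dump
    # is empty and both sides of the comparison are the empty string.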
00:23:07.753 [2024-12-05 11:06:34.896833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.753 [2024-12-05 11:06:34.896833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.033 [2024-12-05 11:06:34.938249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80979 00:23:08.600 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:08.859 [2024-12-05 11:06:35.841502] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.859 11:06:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:09.118 Malloc0 00:23:09.118 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:09.378 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:09.378 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.637 [2024-12-05 11:06:36.722497] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.637 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:09.896 [2024-12-05 11:06:36.938378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81029 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81029 /var/tmp/bdevperf.sock 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81029 ']' 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/bdevperf.sock 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:09.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.896 11:06:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:10.833 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:10.833 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:23:10.833 11:06:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:11.092 11:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:11.351 Nvme0n1 00:23:11.351 11:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:11.610 Nvme0n1 00:23:11.869 11:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:11.869 11:06:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:23:12.808 11:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:12.808 11:06:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:13.069 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:13.069 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:13.069 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81080 00:23:13.069 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:13.069 11:06:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:19.635 11:06:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.635 Attaching 4 probes... 00:23:19.635 @path[10.0.0.2, 4421]: 22920 00:23:19.635 @path[10.0.0.2, 4421]: 23328 00:23:19.635 @path[10.0.0.2, 4421]: 23040 00:23:19.635 @path[10.0.0.2, 4421]: 23066 00:23:19.635 @path[10.0.0.2, 4421]: 23201 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81080 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:19.635 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:19.893 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:19.893 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81192 00:23:19.893 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:19.893 11:06:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:26.461 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:26.461 11:06:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.461 Attaching 4 probes... 
00:23:26.461 @path[10.0.0.2, 4420]: 22481 00:23:26.461 @path[10.0.0.2, 4420]: 22761 00:23:26.461 @path[10.0.0.2, 4420]: 22242 00:23:26.461 @path[10.0.0.2, 4420]: 22981 00:23:26.461 @path[10.0.0.2, 4420]: 23100 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81192 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:26.461 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:26.726 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:26.726 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:26.726 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81306 00:23:26.726 11:06:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.317 Attaching 4 probes... 
00:23:33.317 @path[10.0.0.2, 4421]: 18074 00:23:33.317 @path[10.0.0.2, 4421]: 22788 00:23:33.317 @path[10.0.0.2, 4421]: 22743 00:23:33.317 @path[10.0.0.2, 4421]: 22516 00:23:33.317 @path[10.0.0.2, 4421]: 22642 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81306 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:33.317 11:06:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:33.317 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.317 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:33.317 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:33.317 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81418 00:23:33.317 11:07:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.914 Attaching 4 probes... 
00:23:39.914 00:23:39.914 00:23:39.914 00:23:39.914 00:23:39.914 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81418 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.914 11:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:39.914 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:39.914 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81536 00:23:39.914 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:39.914 11:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.479 Attaching 4 probes... 
00:23:46.479 @path[10.0.0.2, 4421]: 21339 00:23:46.479 @path[10.0.0.2, 4421]: 22206 00:23:46.479 @path[10.0.0.2, 4421]: 22341 00:23:46.479 @path[10.0.0.2, 4421]: 22244 00:23:46.479 @path[10.0.0.2, 4421]: 21128 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81536 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.479 11:07:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:47.856 11:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:47.856 11:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81654 00:23:47.856 11:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:47.856 11:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.432 Attaching 4 probes... 
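The sequence above is the live-failover leg of the test: nvmf_subsystem_remove_listener drops the optimized 4421 listener while bdevperf is still running, so the initiator's multipath policy has to fall back to the surviving non_optimized path on 4420, which the next confirm_io_on_port call then verifies. The jq filter used throughout implies that each element returned by nvmf_subsystem_get_listeners carries at least an address.trsvcid and an ana_states array; a hypothetical reply consistent with that filter, with all other fields omitted:

[ { "address": { "trsvcid": "4420" }, "ana_states": [ { "ana_state": "non_optimized" } ] } ]

Against such a reply,

rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'

prints 4420, the port the test expects I/O to land on after the failover.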
00:23:54.432 @path[10.0.0.2, 4420]: 19218 00:23:54.432 @path[10.0.0.2, 4420]: 20011 00:23:54.432 @path[10.0.0.2, 4420]: 20632 00:23:54.432 @path[10.0.0.2, 4420]: 20319 00:23:54.432 @path[10.0.0.2, 4420]: 20265 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81654 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:54.432 11:07:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:54.432 [2024-12-05 11:07:21.043983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:54.432 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:54.432 11:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:24:01.003 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:01.003 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80979 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:01.003 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81828 00:24:01.003 11:07:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:06.298 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:06.298 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:06.557 Attaching 4 probes... 
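The port-extraction pipeline at multipath.sh:69 shows up with its awk, cut, and sed stages in shuffled order because xtrace prints each pipeline segment as it starts, not in data-flow order. One ordering consistent with the traced segments (awk, then cut, then sed), worked on a counter line like those printed above:

echo '@path[10.0.0.2, 4421]: 22788' \
    | awk '$1=="@path[10.0.0.2," {print $2}' \
    | cut -d']' -f1 \
    | sed -n 1p
# awk keys on the first whitespace-delimited field and emits the second, "4421]:";
# cut trims everything from the first "]"; sed keeps only the first line -> 4421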
00:24:06.557 @path[10.0.0.2, 4421]: 19455 00:24:06.557 @path[10.0.0.2, 4421]: 19544 00:24:06.557 @path[10.0.0.2, 4421]: 19520 00:24:06.557 @path[10.0.0.2, 4421]: 19680 00:24:06.557 @path[10.0.0.2, 4421]: 19659 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:06.557 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81828 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81029 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81029 ']' 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81029 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81029 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.558 killing process with pid 81029 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81029' 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81029 00:24:06.558 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81029 00:24:06.558 { 00:24:06.558 "results": [ 00:24:06.558 { 00:24:06.558 "job": "Nvme0n1", 00:24:06.558 "core_mask": "0x4", 00:24:06.558 "workload": "verify", 00:24:06.558 "status": "terminated", 00:24:06.558 "verify_range": { 00:24:06.558 "start": 0, 00:24:06.558 "length": 16384 00:24:06.558 }, 00:24:06.558 "queue_depth": 128, 00:24:06.558 "io_size": 4096, 00:24:06.558 "runtime": 54.793829, 00:24:06.558 "iops": 9185.888432801437, 00:24:06.558 "mibps": 35.882376690630615, 00:24:06.558 "io_failed": 0, 00:24:06.558 "io_timeout": 0, 00:24:06.558 "avg_latency_us": 13918.538313556213, 00:24:06.558 "min_latency_us": 152.1606425702811, 00:24:06.558 "max_latency_us": 7061253.963052209 00:24:06.558 } 00:24:06.558 ], 00:24:06.558 "core_count": 1 00:24:06.558 } 00:24:06.826 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81029 00:24:06.826 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:06.826 [2024-12-05 11:06:37.013096] Starting SPDK v25.01-pre git sha1 3a4e432ea 
/ DPDK 24.03.0 initialization... 00:24:06.826 [2024-12-05 11:06:37.013206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81029 ] 00:24:06.826 [2024-12-05 11:06:37.152312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.826 [2024-12-05 11:06:37.245345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.826 [2024-12-05 11:06:37.318333] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.826 Running I/O for 90 seconds... 00:24:06.826 10752.00 IOPS, 42.00 MiB/s [2024-12-05T11:07:33.985Z] 11358.50 IOPS, 44.37 MiB/s [2024-12-05T11:07:33.985Z] 11495.00 IOPS, 44.90 MiB/s [2024-12-05T11:07:33.985Z] 11533.25 IOPS, 45.05 MiB/s [2024-12-05T11:07:33.985Z] 11527.40 IOPS, 45.03 MiB/s [2024-12-05T11:07:33.985Z] 11526.17 IOPS, 45.02 MiB/s [2024-12-05T11:07:33.985Z] 11534.43 IOPS, 45.06 MiB/s [2024-12-05T11:07:33.985Z] 11504.62 IOPS, 44.94 MiB/s [2024-12-05T11:07:33.985Z] [2024-12-05 11:06:46.883299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883607] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.826 [2024-12-05 11:06:46.883640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:06.826 [2024-12-05 11:06:46.883852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.826 [2024-12-05 11:06:46.883866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.883886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.883899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.883919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.883933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.884715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.884982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.884996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:24:06.827 [2024-12-05 11:06:46.885230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.827 [2024-12-05 11:06:46.885279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.885878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.827 [2024-12-05 11:06:46.885913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:06.827 [2024-12-05 11:06:46.885933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.885947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.885968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.885982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.828 [2024-12-05 11:06:46.886680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:06.828 [2024-12-05 11:06:46.886846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.886969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.886983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:06.828 [2024-12-05 11:06:46.887335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.828 [2024-12-05 11:06:46.887349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.887838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:24:06.829 [2024-12-05 11:06:46.887901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.887972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.887991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.888025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.888060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.888094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.829 [2024-12-05 11:06:46.888128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:06.829 [2024-12-05 11:06:46.888580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.829 [2024-12-05 11:06:46.888595] nvme_qpair.c: 
00:24:06.829 [2024-12-05 11:06:46.888617] nvme_qpair.c: [tail of 11:06:46 burst trimmed: READ lba:83152-83168 len:8 qid:1 command/completion pairs, each completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:0030-0033 p:0 m:0 dnr:0]
00:24:06.829 11452.00 IOPS, 44.73 MiB/s [2024-12-05T11:07:33.988Z] 11446.00 IOPS, 44.71 MiB/s [2024-12-05T11:07:33.988Z] 11436.00 IOPS, 44.67 MiB/s [2024-12-05T11:07:33.988Z] 11411.67 IOPS, 44.58 MiB/s [2024-12-05T11:07:33.989Z] 11418.77 IOPS, 44.60 MiB/s [2024-12-05T11:07:33.989Z] 11428.57 IOPS, 44.64 MiB/s [2024-12-05T11:07:33.989Z]
00:24:06.830 [2024-12-05 11:06:53.390996] nvme_qpair.c: [11:06:53 burst trimmed: 128 command/completion pairs — WRITE lba:19296-19928 (80 cmds) and READ lba:18912-19288 (48 cmds), len:8, qid:1, logged by nvme_qpair.c:243/474; every completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0078-007f and wrapping through 0000-0077]
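The parenthesized pair in these completion lines is (SCT/SC), Status Code Type / Status Code. (03/02) is SCT 0x3 Path Related Status with SC 0x02 Asymmetric Access Inaccessible: the ANA state of the path behind qid:1 has gone Inaccessible, so in-flight I/O completes with this status, and dnr:0 (Do Not Retry clear) leaves the host free to resubmit on another path. The sqhd values stepping 0078 through 007f and wrapping to 0000 are also consistent with a 128-entry submission queue. A minimal standalone sketch of the decode follows — illustrative only; the real formatting is done by spdk_nvme_print_completion() in nvme_qpair.c, and the helper names here are hypothetical, with tables taken from the NVMe base specification:

    /* decode_sct_sc.c - hypothetical standalone decoder for the
     * "(SCT/SC)" pair logged above. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *sct_name(uint8_t sct)
    {
            switch (sct) {
            case 0x0: return "GENERIC COMMAND STATUS";
            case 0x1: return "COMMAND SPECIFIC STATUS";
            case 0x2: return "MEDIA AND DATA INTEGRITY ERROR";
            case 0x3: return "PATH RELATED STATUS";
            default:  return "RESERVED/VENDOR SPECIFIC";
            }
    }

    static const char *path_sc_name(uint8_t sc)
    {
            /* Status codes defined for SCT 0x3 (path related) only. */
            switch (sc) {
            case 0x00: return "INTERNAL PATH ERROR";
            case 0x01: return "ASYMMETRIC ACCESS PERSISTENT LOSS";
            case 0x02: return "ASYMMETRIC ACCESS INACCESSIBLE";
            case 0x03: return "ASYMMETRIC ACCESS TRANSITION";
            default:   return "OTHER PATH RELATED";
            }
    }

    int main(void)
    {
            uint8_t sct = 0x03, sc = 0x02; /* values from the lines above */
            int dnr = 0;                   /* Do Not Retry bit, also logged */

            printf("(%02x/%02x) = %s / %s, dnr:%d (%s)\n",
                   sct, sc, sct_name(sct),
                   sct == 0x3 ? path_sc_name(sc) : "(per-SCT table)",
                   dnr, dnr ? "must not retry"
                            : "retry on another path allowed");
            return 0;
    }

Compiled with cc decode_sct_sc.c, this prints "(03/02) = PATH RELATED STATUS / ASYMMETRIC ACCESS INACCESSIBLE, dnr:0 (retry on another path allowed)", matching the notices above — presumably this test stage is toggling ANA state to force exactly these retryable path errors.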
00:24:06.833 11080.27 IOPS, 43.28 MiB/s [2024-12-05T11:07:33.992Z] 10712.06 IOPS, 41.84 MiB/s [2024-12-05T11:07:33.992Z] 10748.53 IOPS, 41.99 MiB/s [2024-12-05T11:07:33.992Z] 10783.83 IOPS, 42.12 MiB/s [2024-12-05T11:07:33.992Z] 10808.26 IOPS, 42.22 MiB/s [2024-12-05T11:07:33.992Z] 10831.85 IOPS, 42.31 MiB/s [2024-12-05T11:07:33.992Z] 10854.33 IOPS, 42.40 MiB/s [2024-12-05T11:07:33.992Z]
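These interleaved throughput samples are self-consistent with the 4 KiB I/O size the command lines imply (len:8 sectors of 512 B = 4096 B): IOPS * 4096 / 2^20 reproduces each MiB/s figure, e.g. 11452.00 * 4096 B ~= 44.73 MiB/s. A quick sanity-check sketch, with sample values copied from this log:

    /* iops_check.c - convert the logged IOPS samples to MiB/s,
     * assuming the 4 KiB I/O size (len:8 x 512 B) seen above. */
    #include <stdio.h>

    int main(void)
    {
            const double iops[] = { 11452.00, 11428.57, 11080.27, 10854.33 };
            const double io_bytes = 8 * 512.0;   /* len:8 sectors of 512 B */
            const double mib = 1024.0 * 1024.0;

            for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
                    printf("%8.2f IOPS -> %5.2f MiB/s\n",
                           iops[i], iops[i] * io_bytes / mib);
            }
            return 0;
    }

The output (44.73, 44.64, 43.28, 42.40 MiB/s) matches the logged samples, and the dip from ~44.7 to ~42.4 MiB/s between the two sample runs is presumably the cost of the path sitting in the Inaccessible ANA state.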
00:24:06.833 [2024-12-05 11:07:00.308387] nvme_qpair.c: [11:07:00 burst trimmed: WRITE lba:30248-30552 and READ lba:29864-30048 command/completion pairs, len:8, qid:1, logged by nvme_qpair.c:243/474; every completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0066-007f and wrapping through 0000-0024]
00:24:06.835 [2024-12-05 11:07:00.310626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:06.835 [2024-12-05 11:07:00.310639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 
nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.310980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.310993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.311024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.311056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.311087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.311119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.835 [2024-12-05 11:07:00.311150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:30576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:30600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:30624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.835 [2024-12-05 11:07:00.311516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:06.835 [2024-12-05 11:07:00.311535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:24:06.836 [2024-12-05 11:07:00.311597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.311803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.311980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.311992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.312024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:00.312631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:30776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.312967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:30792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.312980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:30824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:30840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:06.836 [2024-12-05 11:07:00.313208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:00.313322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:30864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.836 [2024-12-05 11:07:00.313336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:06.836 10595.86 IOPS, 41.39 MiB/s [2024-12-05T11:07:33.995Z] 10135.17 IOPS, 39.59 MiB/s [2024-12-05T11:07:33.995Z] 9712.88 IOPS, 37.94 MiB/s [2024-12-05T11:07:33.995Z] 9324.36 IOPS, 36.42 MiB/s [2024-12-05T11:07:33.995Z] 8965.73 IOPS, 35.02 MiB/s [2024-12-05T11:07:33.995Z] 8633.67 IOPS, 33.73 MiB/s [2024-12-05T11:07:33.995Z] 8325.32 IOPS, 32.52 MiB/s [2024-12-05T11:07:33.995Z] 8227.86 IOPS, 32.14 MiB/s [2024-12-05T11:07:33.995Z] 8316.87 IOPS, 32.49 MiB/s [2024-12-05T11:07:33.995Z] 8407.81 IOPS, 32.84 MiB/s [2024-12-05T11:07:33.995Z] 8493.06 IOPS, 33.18 MiB/s [2024-12-05T11:07:33.995Z] 8571.76 IOPS, 33.48 MiB/s [2024-12-05T11:07:33.995Z] 8627.18 IOPS, 33.70 MiB/s [2024-12-05T11:07:33.995Z] [2024-12-05 11:07:13.549122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:13.549197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.836 [2024-12-05 11:07:13.549223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.836 [2024-12-05 11:07:13.549237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.837 [2024-12-05 11:07:13.549254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.837 [2024-12-05 11:07:13.549268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.837 [2024-12-05 11:07:13.549295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.837 [2024-12-05 11:07:13.549309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.837 [2024-12-05 11:07:13.549325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.837 
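The status pairs printed above and below are the NVMe completion's status-code-type/status-code fields: (03/02) is SCT 0x3 (path-related) / SC 0x02, printed as ASYMMETRIC ACCESS INACCESSIBLE, and (00/08) is SCT 0x0 (generic) / SC 0x08, ABORTED - SQ DELETION. A minimal standalone C sketch, assuming only the NVMe 16-bit completion-status layout (phase bit 0, SC bits 8:1, SCT bits 11:9) and not SPDK's actual print helper, that unpacks the pair from a raw status word:

    /* decode_status.c: illustrate how the "(SCT/SC)" pair in the log
     * is unpacked from a raw NVMe completion status word. */
    #include <stdint.h>
    #include <stdio.h>

    static const char *status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x3 && sc == 0x02)
            return "ASYMMETRIC ACCESS INACCESSIBLE";
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "OTHER";
    }

    int main(void)
    {
        /* Example raw status words matching the two completions in this log. */
        uint16_t samples[] = { (0x3 << 9) | (0x02 << 1), (0x0 << 9) | (0x08 << 1) };

        for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
            uint8_t sc  = (samples[i] >> 1) & 0xff;  /* status code */
            uint8_t sct = (samples[i] >> 9) & 0x7;   /* status code type */
            printf("%s (%02x/%02x)\n", status_str(sct, sc), sct, sc);
        }
        return 0;
    }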
00:24:06.836-00:24:06.839 [2024-12-05 11:07:13.549122-11:07:13.552475] nvme_qpair.c: 243:nvme_io_qpair_print_command/474:spdk_nvme_print_completion: *NOTICE*: READ (sqid:1, lba:47968-48408, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1, lba:48480-48864, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-12-05 11:07:13.552490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.839 [2024-12-05 11:07:13.552875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.839 [2024-12-05 11:07:13.552896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.840 [2024-12-05 11:07:13.552910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.552925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.840 [2024-12-05 11:07:13.552938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.552953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.552967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 
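[editor note] The wall of notices above is bdev_nvme manually completing every queued command with status (00/08), which the trace itself labels "ABORTED - SQ DELETION": the submission queue on the failing path is torn down during the controller reset, so each outstanding READ/WRITE is aborted rather than lost. When triaging output like this, a hedged one-liner can condense it (build.log is an assumed filename for a saved copy of this console log):

    # Count aborted READs vs WRITEs in a saved copy of this log; the field
    # layout matches the nvme_io_qpair_print_command notices above.
    grep -oE '(READ|WRITE) sqid:1' build.log | awk '{print $1}' | sort | uniq -c
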
[2024-12-05 11:07:13.553135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.840 [2024-12-05 11:07:13.553149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.840 [2024-12-05 11:07:13.553216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.840 [2024-12-05 11:07:13.553231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48472 len:8 PRP1 0x0 PRP2 0x0 00:24:06.840 [2024-12-05 11:07:13.553245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.840 [2024-12-05 11:07:13.553412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.840 [2024-12-05 11:07:13.553441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.840 [2024-12-05 11:07:13.553479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.840 [2024-12-05 11:07:13.553507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.840 [2024-12-05 11:07:13.553522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16361e0 is same with the state(6) to be set 00:24:06.840 [2024-12-05 11:07:13.554524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:06.840 [2024-12-05 11:07:13.554565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16361e0 (9): Bad file descriptor 00:24:06.840 [2024-12-05 11:07:13.554877] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:06.840 [2024-12-05 11:07:13.554904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16361e0 with addr=10.0.0.2, port=4421 00:24:06.840 [2024-12-05 11:07:13.554920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16361e0 is same with the state(6) to be set 00:24:06.840 [2024-12-05 11:07:13.555049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16361e0 (9): Bad file descriptor 00:24:06.840 [2024-12-05 11:07:13.555101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 
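[editor note] The errno 111 printed by uring_sock_create above is ECONNREFUSED: the second portal (10.0.0.2:4421) is not accepting connections yet, so the reset fails and bdev_nvme keeps retrying until, about ten seconds later in the trace, "Resetting controller successful" appears. A hedged sketch of the same retry idea from an initiator's point of view with nvme-cli (not SPDK's internal logic; address, port, and NQN are the ones printed above, the interval is an assumption):

    # Keep retrying the second portal until the target path comes back.
    until nvme connect -t tcp -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1; do
        sleep 1   # retry interval is an assumption
    done
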
00:24:06.840 [2024-12-05 11:07:13.555119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:06.840 [2024-12-05 11:07:13.555135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:06.840 [2024-12-05 11:07:13.555149] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:06.840 [2024-12-05 11:07:13.555164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:06.840 8677.17 IOPS, 33.90 MiB/s [2024-12-05T11:07:33.999Z] 8712.08 IOPS, 34.03 MiB/s [2024-12-05T11:07:33.999Z] 8739.81 IOPS, 34.14 MiB/s [2024-12-05T11:07:33.999Z] 8769.13 IOPS, 34.25 MiB/s [2024-12-05T11:07:33.999Z] 8807.87 IOPS, 34.41 MiB/s [2024-12-05T11:07:33.999Z] 8840.88 IOPS, 34.53 MiB/s [2024-12-05T11:07:33.999Z] 8872.46 IOPS, 34.66 MiB/s [2024-12-05T11:07:33.999Z] 8903.12 IOPS, 34.78 MiB/s [2024-12-05T11:07:33.999Z] 8937.19 IOPS, 34.91 MiB/s [2024-12-05T11:07:33.999Z] 8975.52 IOPS, 35.06 MiB/s [2024-12-05T11:07:33.999Z] [2024-12-05 11:07:23.582645] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:06.840 9020.31 IOPS, 35.24 MiB/s [2024-12-05T11:07:33.999Z] 9064.04 IOPS, 35.41 MiB/s [2024-12-05T11:07:33.999Z] 9088.55 IOPS, 35.50 MiB/s [2024-12-05T11:07:33.999Z] 9102.38 IOPS, 35.56 MiB/s [2024-12-05T11:07:33.999Z] 9116.94 IOPS, 35.61 MiB/s [2024-12-05T11:07:33.999Z] 9129.16 IOPS, 35.66 MiB/s [2024-12-05T11:07:33.999Z] 9141.53 IOPS, 35.71 MiB/s [2024-12-05T11:07:33.999Z] 9153.27 IOPS, 35.75 MiB/s [2024-12-05T11:07:33.999Z] 9166.08 IOPS, 35.80 MiB/s [2024-12-05T11:07:33.999Z] 9178.26 IOPS, 35.85 MiB/s [2024-12-05T11:07:33.999Z] Received shutdown signal, test time was about 54.794503 seconds 00:24:06.840 00:24:06.840 Latency(us) 00:24:06.840 [2024-12-05T11:07:33.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.840 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.840 Verification LBA range: start 0x0 length 0x4000 00:24:06.840 Nvme0n1 : 54.79 9185.89 35.88 0.00 0.00 13918.54 152.16 7061253.96 00:24:06.840 [2024-12-05T11:07:33.999Z] =================================================================================================================== 00:24:06.840 [2024-12-05T11:07:33.999Z] Total : 9185.89 35.88 0.00 0.00 13918.54 152.16 7061253.96 00:24:06.840 11:07:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@99 -- # sync 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@102 -- # set +e 00:24:07.100 11:07:34 
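[editor note] After deleting the subsystem over rpc.py, nvmftestfini syncs, drops errexit, and unloads the kernel modules inside the bounded retry loop whose expansion follows (`for i in {1..20}`). A minimal sketch of that pattern (the 1..20 bound is from common.sh; the break/sleep shape is an assumption about its intent):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
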
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:07.100 rmmod nvme_tcp 00:24:07.100 rmmod nvme_fabrics 00:24:07.100 rmmod nvme_keyring 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@106 -- # set -e 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@107 -- # return 0 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@336 -- # '[' -n 80979 ']' 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@337 -- # killprocess 80979 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80979 ']' 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80979 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80979 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.100 killing process with pid 80979 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80979' 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80979 00:24:07.100 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80979 00:24:07.359 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@254 -- # local dev 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:07.360 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@274 -- # iptr 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-save 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:24:07.619 00:24:07.619 real 1m0.965s 00:24:07.619 user 2m42.145s 00:24:07.619 sys 0m24.220s 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 
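[editor note] The teardown above walks dev_map, deletes whichever veth endpoints still exist in the default namespace (target0/target1 hit `continue` because they lived inside the already-removed netns), and then strips only its own firewall rules by round-tripping the ruleset through iptables-save. Condensed sketch (device names from the log; the existence test mirrors setup.sh's /sys/class/net check):

    for dev in nvmf_br initiator0 initiator1 target0 target1; do
        [[ -e /sys/class/net/$dev/address ]] && ip link delete "$dev"
    done
    # Re-apply the ruleset minus anything tagged with an SPDK_NVMF comment.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
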
00:24:07.619 ************************************ 00:24:07.619 END TEST nvmf_host_multipath 00:24:07.619 ************************************ 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.619 ************************************ 00:24:07.619 START TEST nvmf_timeout 00:24:07.619 ************************************ 00:24:07.619 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:07.879 * Looking for test storage... 00:24:07.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:07.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.879 --rc genhtml_branch_coverage=1 00:24:07.879 --rc genhtml_function_coverage=1 00:24:07.879 --rc genhtml_legend=1 00:24:07.879 --rc geninfo_all_blocks=1 00:24:07.879 --rc geninfo_unexecuted_blocks=1 00:24:07.879 00:24:07.879 ' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:07.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.879 --rc genhtml_branch_coverage=1 00:24:07.879 --rc genhtml_function_coverage=1 00:24:07.879 --rc genhtml_legend=1 00:24:07.879 --rc geninfo_all_blocks=1 00:24:07.879 --rc geninfo_unexecuted_blocks=1 00:24:07.879 00:24:07.879 ' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:07.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.879 --rc genhtml_branch_coverage=1 00:24:07.879 --rc genhtml_function_coverage=1 00:24:07.879 --rc genhtml_legend=1 00:24:07.879 --rc geninfo_all_blocks=1 00:24:07.879 --rc geninfo_unexecuted_blocks=1 00:24:07.879 00:24:07.879 ' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:07.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.879 --rc genhtml_branch_coverage=1 00:24:07.879 --rc genhtml_function_coverage=1 00:24:07.879 --rc genhtml_legend=1 00:24:07.879 --rc geninfo_all_blocks=1 00:24:07.879 --rc geninfo_unexecuted_blocks=1 00:24:07.879 00:24:07.879 ' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.879 
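[editor note] The `lt 1.15 2` probe traced above splits both version strings on `.`/`-` and compares them numerically field by field, which is why lcov 1.15 correctly sorts before 2 (a plain string comparison would not). A hedged re-derivation of the helper (the real cmp_versions in scripts/common.sh handles more operators and validates each field; this sketch covers only the less-than path):

    lt() {
        local -a v1 v2
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"
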
11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.879 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@50 -- # : 0 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:07.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.880 11:07:34 
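[editor note] With NET_TYPE=virt, nvmftestinit builds the whole fabric in software, as the trace that follows expands wrapper by wrapper: a target network namespace, one bridge, and per-pair veths whose bridge ends are enslaved to it. Stripped of the eval/set_up indirection, pair 0 amounts to roughly this (all names and addresses are the ones printed in the trace):

    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk
    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    ip link set initiator0 up
    ip netns exec nvmf_ns_spdk ip link set target0 up
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up
    ip link set target0_br master nvmf_br && ip link set target0_br up
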
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@260 -- # remove_target_ns 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@223 -- # create_target_ns 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # return 0 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:07.880 
11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:07.880 11:07:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@28 -- # local -g _dev 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local 
dev=initiator0_br in_ns= 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:07.880 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772161 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # 
echo 10.0.0.1 00:24:08.138 10.0.0.1 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772162 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:08.138 10.0.0.2 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:08.138 11:07:35 
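[editor note] The address pool is a single integer (ip_pool=0x0a000001) and val_to_ip, traced above, just renders it as a dotted quad; each interface pair consumes two consecutive values (10.0.0.1/10.0.0.2 here, 10.0.0.3/10.0.0.4 for the next pair). The arithmetic spelled out (setup.sh passes the bytes straight to printf; the shifts below are an equivalent way to obtain them):

    # Worked decode: 167772161 == 0x0A000001 == 10.0.0.1
    val=167772161
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
    # -> 10.0.0.1
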
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 
-- # set_up initiator1 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:08.138 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target1 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772163 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:08.139 11:07:35 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:08.139 10.0.0.3 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772164 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:08.139 10.0.0.4 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:08.139 11:07:35 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:08.139 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:08.397 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:08.397 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:08.397 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:08.398 11:07:35 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:24:08.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:08.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:24:08.398 
00:24:08.398 --- 10.0.0.1 ping statistics ---
00:24:08.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.398 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:24:08.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:08.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms
00:24:08.398 
00:24:08.398 --- 10.0.0.2 ping statistics ---
00:24:08.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.398 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ ))
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:24:08.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:08.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms
00:24:08.398 
00:24:08.398 --- 10.0.0.3 ping statistics ---
00:24:08.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.398 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:24:08.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:08.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.141 ms
00:24:08.398 
00:24:08.398 --- 10.0.0.4 ping statistics ---
00:24:08.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:08.398 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ ))
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@281 -- # return 0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:24:08.398 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:24:08.399 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@328 -- # nvmfpid=82189
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@329 -- # waitforlisten 82189
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82189 ']'
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
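
What the trace above has just finished: nvmf/setup.sh built two initiator/target veth pairs (initiator0/target0 at 10.0.0.1 and 10.0.0.2, initiator1/target1 at 10.0.0.3 and 10.0.0.4), moved each target end into the nvmf_ns_spdk network namespace, enslaved the *_br peers to the nvmf_br bridge, recorded every address in the interface's ifalias file (which is what get_ip_address later reads back), opened TCP port 4420 through iptables, ping-verified all four addresses, and exported the legacy NVMF_* variables. Condensed into bare iproute2/iptables calls, one pair amounts to roughly the sketch below. This is an illustrative reduction of the helpers seen in the trace, not the script itself, and it assumes the nvmf_ns_spdk namespace and the nvmf_br bridge already exist from an earlier step of the run.

    # setup.sh passes addresses around as 32-bit integers; val_to_ip turns
    # 167772162 (0x0A000002) back into the dotted quad 10.0.0.2
    val_to_ip() {
      local val=$1
      printf '%u.%u.%u.%u\n' $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
    }

    ip link add initiator0 type veth peer name initiator0_br        # create_veth
    ip link add target0 type veth peer name target0_br
    ip link set target0 netns nvmf_ns_spdk                          # add_to_ns: target side lives in the namespace
    ip addr add 10.0.0.1/24 dev initiator0                          # set_ip ...
    echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias           # ... which also stashes the IP in ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
    ip link set initiator0 up                                       # set_up, host side
    ip netns exec nvmf_ns_spdk ip link set target0 up               # set_up, namespace side
    ip link set initiator0_br master nvmf_br && ip link set initiator0_br up   # add_to_bridge
    ip link set target0_br master nvmf_br && ip link set target0_br up
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT            # ipts: admit NVMe/TCP
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1                   # ping_ip: reachability check

The bridge is what lets the host-side initiator interfaces and the namespaced target interfaces reach each other even though each veth pair only spans one hop: initiator0 to initiator0_br, across nvmf_br, then target0_br to target0 inside the namespace.
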
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:08.657 11:07:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:08.657 [2024-12-05 11:07:35.672504] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:24:08.657 [2024-12-05 11:07:35.672591] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:08.657 [2024-12-05 11:07:35.814329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:08.914 [2024-12-05 11:07:35.882355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:08.915 [2024-12-05 11:07:35.882410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:08.915 [2024-12-05 11:07:35.882421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:08.915 [2024-12-05 11:07:35.882430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:08.915 [2024-12-05 11:07:35.882438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:08.915 [2024-12-05 11:07:35.883349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:08.915 [2024-12-05 11:07:35.883353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:08.915 [2024-12-05 11:07:35.926405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:09.849 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:09.850 [2024-12-05 11:07:36.930317] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:09.850 11:07:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:10.108 Malloc0
00:24:10.108 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:10.367 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:10.627 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:10.887 [2024-12-05 11:07:37.880113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82244
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82244 /var/tmp/bdevperf.sock
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82244 ']'
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:10.887 11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:10.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
11:07:37 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:11.147 [2024-12-05 11:07:37.934901] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
[2024-12-05 11:07:37.934988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82244 ]
[2024-12-05 11:07:38.089205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 11:07:38.144127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-05 11:07:38.185743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:11.714 11:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:11.714 11:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:24:11.714 11:07:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:11.974 11:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:12.233 NVMe0n1
00:24:12.233 11:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:12.233 11:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82266
00:24:12.233 11:07:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1
00:24:12.493 Running I/O for 10 seconds...
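
The run that starts here is the timeout exercise itself. The controller was attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, bdevperf drives a 10-second, queue-depth-128 verify workload at it, and the very next RPC in the trace tears the listener out from under the live connection; the flood of "ABORTED - SQ DELETION" completions that follows is the intended symptom, not a failure. Reduced to its RPC skeleton, using the same calls and arguments as the trace (the long /home/vagrant/... paths are abbreviated here, and backgrounding is implied):

    # target side: TCP transport, one malloc-backed namespace, listener on 10.0.0.2:4420
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host side: bdevperf waits for RPC configuration (-z), then the attach arms the timeouts
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1

    # fault injection: drop the listener while the verify workload is in flight
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With a 2-second reconnect delay inside a 5-second controller-loss budget, the host bdev layer gets only a couple of reconnect attempts before it has to give up on the controller, and that window is what this timeout test exercises.
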
00:24:13.458 11:07:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.458 11300.00 IOPS, 44.14 MiB/s [2024-12-05T11:07:40.617Z] [2024-12-05 11:07:40.550843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.550901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.550921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.550932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.550946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.550955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.550967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.550976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.550987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.550997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.458 [2024-12-05 11:07:40.551404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 
[2024-12-05 11:07:40.551525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.458 [2024-12-05 11:07:40.551658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.458 [2024-12-05 11:07:40.551667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.459 [2024-12-05 11:07:40.551678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.459 [2024-12-05 11:07:40.551687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.459 [2024-12-05 11:07:40.551698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.459 [2024-12-05 11:07:40.551707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:13.459 [2024-12-05 11:07:40.551718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:13.459 [2024-12-05 11:07:40.551727] nvme_qpair.c: 
00:24:13.459 [... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs trimmed: in-flight READ (lba 102976-103216) and WRITE (lba 103552-103816) commands on sqid:1 aborted with ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:13.460 [2024-12-05 11:07:40.553082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1059970 is same with the state(6) to be set
00:24:13.460 [... repeated nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request sequences trimmed: queued READ (lba 103224-103288) and WRITE (lba 103824-103928) commands completed manually and aborted with ABORTED - SQ DELETION (00/08) ...]
00:24:13.461 [2024-12-05 11:07:40.554156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:13.461 [2024-12-05 11:07:40.554236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9e50 (9): Bad file descriptor
00:24:13.461 [2024-12-05 11:07:40.554343] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:13.461 [2024-12-05 11:07:40.554360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9e50 with addr=10.0.0.2, port=4420
00:24:13.461 [2024-12-05 11:07:40.554370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9e50 is same with the state(6) to be set
00:24:13.461 [2024-12-05 11:07:40.554386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9e50 (9): Bad file descriptor
00:24:13.461 [2024-12-05 11:07:40.554400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:13.461 [2024-12-05 11:07:40.554411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:13.461 [2024-12-05 11:07:40.554421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:13.461 [2024-12-05 11:07:40.554431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:13.461 [2024-12-05 11:07:40.554442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
11:07:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:24:15.334 6432.00 IOPS, 25.12 MiB/s [2024-12-05T11:07:42.753Z] 4288.00 IOPS, 16.75 MiB/s [2024-12-05T11:07:42.753Z]
[2024-12-05 11:07:42.551400] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:15.594 [2024-12-05 11:07:42.551472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9e50 with addr=10.0.0.2, port=4420
00:24:15.594 [2024-12-05 11:07:42.551485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9e50 is same with the state(6) to be set
00:24:15.594 [2024-12-05 11:07:42.551507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9e50 (9): Bad file descriptor
00:24:15.594 [2024-12-05 11:07:42.551524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:15.594 [2024-12-05 11:07:42.551534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:15.594 [2024-12-05 11:07:42.551545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:15.594 [2024-12-05 11:07:42.551556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:15.594 [2024-12-05 11:07:42.551567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:15.594 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:24:15.853 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:15.853 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:15.853 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:15.853 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:24:15.853 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:16.112 11:07:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:16.112 11:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:16.112 11:07:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:24:17.309 3216.00 IOPS, 12.56 MiB/s [2024-12-05T11:07:44.727Z] 2572.80 IOPS, 10.05 MiB/s [2024-12-05T11:07:44.727Z]
[2024-12-05 11:07:44.548532] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.568 [2024-12-05 11:07:44.548601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xff9e50 with addr=10.0.0.2, port=4420
00:24:17.568 [2024-12-05 11:07:44.548615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9e50 is same with the state(6) to be set
00:24:17.568 [2024-12-05 11:07:44.548638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff9e50 (9): Bad file descriptor
00:24:17.568 [2024-12-05 11:07:44.548654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:17.568 [2024-12-05 11:07:44.548663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:17.568 [2024-12-05 11:07:44.548674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:17.568 [2024-12-05 11:07:44.548685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:17.568 [2024-12-05 11:07:44.548696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:19.468 2144.00 IOPS, 8.38 MiB/s [2024-12-05T11:07:46.627Z] 1837.71 IOPS, 7.18 MiB/s [2024-12-05T11:07:46.627Z]
[2024-12-05 11:07:46.545562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:19.468 [2024-12-05 11:07:46.545630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:24:19.468 [2024-12-05 11:07:46.545642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:24:19.468 [2024-12-05 11:07:46.545654] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:24:19.468 [2024-12-05 11:07:46.545667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:24:20.402 1608.00 IOPS, 6.28 MiB/s
00:24:20.402 Latency(us)
00:24:20.402 [2024-12-05T11:07:47.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.402 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:20.402 Verification LBA range: start 0x0 length 0x4000
00:24:20.402 NVMe0n1 : 8.14 1580.14 6.17 15.72 0.00 80110.82 2776.73 7061253.96
00:24:20.402 [2024-12-05T11:07:47.561Z] ===================================================================================================================
00:24:20.402 [2024-12-05T11:07:47.561Z] Total : 1580.14 6.17 15.72 0.00 80110.82 2776.73 7061253.96
00:24:20.402 {
00:24:20.402   "results": [
00:24:20.402     {
00:24:20.402       "job": "NVMe0n1",
00:24:20.402       "core_mask": "0x4",
00:24:20.402       "workload": "verify",
00:24:20.402       "status": "finished",
00:24:20.402       "verify_range": {
00:24:20.402         "start": 0,
00:24:20.402         "length": 16384
00:24:20.402       },
00:24:20.402       "queue_depth": 128,
00:24:20.402       "io_size": 4096,
00:24:20.402       "runtime": 8.141026,
00:24:20.402       "iops": 1580.144812214087,
00:24:20.402       "mibps": 6.172440672711278,
00:24:20.402       "io_failed": 128,
00:24:20.402       "io_timeout": 0,
00:24:20.402       "avg_latency_us": 80110.82292519833,
00:24:20.402       "min_latency_us": 2776.7261044176707,
00:24:20.402       "max_latency_us": 7061253.963052209
00:24:20.402     }
00:24:20.402   ],
00:24:20.402   "core_count": 1
00:24:20.402 }
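The JSON object above is the machine-readable twin of the latency table. A minimal jq sketch for pulling out the headline numbers — the field names (job, iops, mibps, io_failed) come from the output above, while results.json is a hypothetical file assumed to hold a saved copy of that object:

    # Reduce the bdevperf result object to a one-line summary.
    # results.json is a hypothetical copy of the JSON block printed above.
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed I/Os"' results.json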
00:24:20.968 11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:21.226 11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:21.485 11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82266
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82244
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82244 ']'
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82244
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82244
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82244'
killing process with pid 82244
Received shutdown signal, test time was about 9.214864 seconds
00:24:21.485
00:24:21.485 Latency(us)
00:24:21.485 [2024-12-05T11:07:48.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:21.485 [2024-12-05T11:07:48.644Z] ===================================================================================================================
00:24:21.485 [2024-12-05T11:07:48.644Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82244
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82244
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:22.003 [2024-12-05 11:07:48.956176] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82384
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82384 /var/tmp/bdevperf.sock
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82384 ']'
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:22.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
11:07:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:24:22.262 [2024-12-05 11:07:49.028074] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization...
00:24:22.262 [2024-12-05 11:07:49.028185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82384 ]
00:24:22.262 [2024-12-05 11:07:49.170183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:22.262 [2024-12-05 11:07:49.225009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:22.262 [2024-12-05 11:07:49.267242] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:24:22.828 11:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
11:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
11:07:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:23.087 11:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:23.346 NVMe0n1
00:24:23.606 11:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82408
11:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
11:07:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:24:23.606 Running I/O for 10 seconds...
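The second bdevperf session re-attaches NVMe0 with explicit reconnect knobs (--ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1); the earlier @62/@63 checks asserted that both names had gone empty after the controller dropped out. The presence check itself reduces to two rpc.py + jq calls. A minimal stand-alone sketch — the rpc.py path and socket are the ones used in this run, and the expected names mirror the [[ ... ]] checks seen above:

    # Sketch of the get_controller/get_bdev presence checks exercised by host/timeout.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    ctrl=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')  # "NVMe0" while attached, empty once deleted
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')             # "NVMe0n1" while attached
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]] && echo "controller still present"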
00:24:24.543 11:07:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:24.804 8049.00 IOPS, 31.44 MiB/s [2024-12-05T11:07:51.963Z]
00:24:24.804 [... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs trimmed: in-flight READ/WRITE commands (lba 71608-72000) on sqid:1 aborted with ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0 after the listener was removed ...]
m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72248 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.806 [2024-12-05 11:07:51.738794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.806 [2024-12-05 11:07:51.738805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 
[2024-12-05 11:07:51.738874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.738980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.738990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.807 [2024-12-05 11:07:51.739601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.807 [2024-12-05 11:07:51.739612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.808 [2024-12-05 11:07:51.739621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24ab970 is same with the state(6) to be set 00:24:24.808 [2024-12-05 11:07:51.739644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:24.808 [2024-12-05 11:07:51.739652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:24.808 [2024-12-05 11:07:51.739661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:24:24.808 [2024-12-05 11:07:51.739670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.808 [2024-12-05 11:07:51.739819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.808 [2024-12-05 11:07:51.739838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.808 [2024-12-05 11:07:51.739857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:24.808 [2024-12-05 11:07:51.739876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.808 [2024-12-05 11:07:51.739885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:24.808 [2024-12-05 11:07:51.740101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:24.808 [2024-12-05 11:07:51.740132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:24.808 [2024-12-05 11:07:51.740228] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:24.808 [2024-12-05 11:07:51.740245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420 00:24:24.808 [2024-12-05 11:07:51.740256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:24.808 [2024-12-05 11:07:51.740270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:24.808 [2024-12-05 11:07:51.740295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:24.808 [2024-12-05 11:07:51.740305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:24.808 [2024-12-05 11:07:51.740315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:24.808 [2024-12-05 11:07:51.740325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
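The connect() failed, errno = 111 loop above is expected at this point in the run: ECONNREFUSED means the target currently has no TCP listener on 10.0.0.2:4420, so each reconnect attempt from spdk_nvme_ctrlr_reconnect_poll_async fails until the test restores the listener. host/timeout.sh drives this from the target side with the same two rpc.py calls that appear verbatim elsewhere in this log; a minimal sketch of the toggle, using the paths from this workspace:

  # drop the listener so host reconnects fail with ECONNREFUSED (errno = 111)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # restore it; the next reconnect_poll_async attempt should then succeed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420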
00:24:24.808 [2024-12-05 11:07:51.740335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
11:07:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:24:25.742 4475.50 IOPS, 17.48 MiB/s [2024-12-05T11:07:52.901Z]
00:24:25.742 [2024-12-05 11:07:52.738847] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:25.742 [2024-12-05 11:07:52.738913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420
00:24:25.742 [2024-12-05 11:07:52.738927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set
00:24:25.742 [2024-12-05 11:07:52.738950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor
00:24:25.742 [2024-12-05 11:07:52.738966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:25.742 [2024-12-05 11:07:52.738976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:25.742 [2024-12-05 11:07:52.738986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:25.742 [2024-12-05 11:07:52.738996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:25.742 [2024-12-05 11:07:52.739006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
11:07:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:26.000 [2024-12-05 11:07:52.990165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
11:07:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82408
00:24:26.833 2983.67 IOPS, 11.65 MiB/s [2024-12-05T11:07:53.992Z]
00:24:26.833 [2024-12-05 11:07:53.750829] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
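With the listener back, the next reset attempt completes ("Resetting controller successful." above). One way to double-check the host-side view at this point, assuming bdevperf's RPC socket as used elsewhere in this run (the command is not part of this log, and the expected output shape is illustrative):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  # expect an entry for the controller attached to nqn.2016-06.io.spdk:cnode1 once reconnected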
00:24:28.700 2237.75 IOPS, 8.74 MiB/s [2024-12-05T11:07:56.792Z]
3458.60 IOPS, 13.51 MiB/s [2024-12-05T11:07:57.726Z]
4473.33 IOPS, 17.47 MiB/s [2024-12-05T11:07:58.660Z]
5590.86 IOPS, 21.84 MiB/s [2024-12-05T11:08:00.036Z]
6412.00 IOPS, 25.05 MiB/s [2024-12-05T11:08:00.970Z]
7057.33 IOPS, 27.57 MiB/s [2024-12-05T11:08:00.970Z]
7533.60 IOPS, 29.43 MiB/s
00:24:33.811 Latency(us)
00:24:33.811 [2024-12-05T11:08:00.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.811 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:33.811 Verification LBA range: start 0x0 length 0x4000
00:24:33.811 NVMe0n1 : 10.01 7541.01 29.46 0.00 0.00 16947.50 1039.63 3018551.31
00:24:33.811 [2024-12-05T11:08:00.970Z] ===================================================================================================================
00:24:33.811 [2024-12-05T11:08:00.970Z] Total : 7541.01 29.46 0.00 0.00 16947.50 1039.63 3018551.31
00:24:33.811 {
00:24:33.811   "results": [
00:24:33.811     {
00:24:33.811       "job": "NVMe0n1",
00:24:33.811       "core_mask": "0x4",
00:24:33.811       "workload": "verify",
00:24:33.811       "status": "finished",
00:24:33.811       "verify_range": {
00:24:33.811         "start": 0,
00:24:33.811         "length": 16384
00:24:33.811       },
00:24:33.811       "queue_depth": 128,
00:24:33.811       "io_size": 4096,
00:24:33.811       "runtime": 10.007144,
00:24:33.811       "iops": 7541.012700526744,
00:24:33.811       "mibps": 29.457080861432594,
00:24:33.811       "io_failed": 0,
00:24:33.811       "io_timeout": 0,
00:24:33.811       "avg_latency_us": 16947.49874443177,
00:24:33.811       "min_latency_us": 1039.627309236948,
00:24:33.811       "max_latency_us": 3018551.3124497994
00:24:33.811     }
00:24:33.811   ],
00:24:33.811   "core_count": 1
00:24:33.811 }
11:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82518
11:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
11:08:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:24:33.811 Running I/O for 10 seconds...
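The Latency(us) table and the JSON block above are bdevperf's end-of-run summary: bdevperf was started idle earlier in this run, and the perform_tests call at host/timeout.sh@96 triggers the 10-second verify workload whose results these are. Roughly, with flags matching the queue_depth, io_size, workload, and runtime recorded in the JSON (the exact invocation happens earlier in the log and is not shown here):

  # start bdevperf idle; -z makes it wait for an RPC before issuing I/O
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # kick off the run and block until the summary is printed
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests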
00:24:34.748 11:08:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:34.748 10960.00 IOPS, 42.81 MiB/s [2024-12-05T11:08:01.907Z]
00:24:34.748 [2024-12-05 11:08:01.868109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22078c0 is same with the state(6) to be set
00:24:34.748 [2024-12-05 11:08:01.868179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22078c0 is same with the state(6) to be set
00:24:34.748 [2024-12-05 11:08:01.868190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22078c0 is same with the state(6) to be set
00:24:34.748 [2024-12-05 11:08:01.868426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.748 [2024-12-05 11:08:01.868459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further record pairs elided: READ sqid:1 command prints (lba 98600 through 98824) and WRITE sqid:1 command prints (lba 98944 through 99320), each followed by an identical ABORTED - SQ DELETION (00/08) qid:1 cid:0 completion ...]
00:24:34.750 [2024-12-05 11:08:01.869931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:34.750 [2024-12-05 11:08:01.869940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.869950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.750 [2024-12-05 11:08:01.869959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.869970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.750 [2024-12-05 11:08:01.869978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.869988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.750 [2024-12-05 11:08:01.869997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.750 [2024-12-05 11:08:01.870015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.750 [2024-12-05 11:08:01.870034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 11:08:01.870371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.750 [2024-12-05 11:08:01.870380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.750 [2024-12-05 
11:08:01.870391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.751 [2024-12-05 11:08:01.870400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.751 [2024-12-05 11:08:01.870549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9fd0 is same with the state(6) to be set 00:24:34.751 [2024-12-05 11:08:01.870570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 
[2024-12-05 11:08:01.870608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99472 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870797] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.870961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.870968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.870976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.870986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:34.751 [2024-12-05 11:08:01.870993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.871008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.871017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.871024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99560 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.871039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.871048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.871054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.871070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.871079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.871086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.871101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.871111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.871118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:24:34.751 [2024-12-05 11:08:01.871134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.751 [2024-12-05 11:08:01.871142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.751 [2024-12-05 11:08:01.871151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.751 [2024-12-05 11:08:01.871158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:24:34.752 [2024-12-05 11:08:01.871167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.871176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.752 [2024-12-05 11:08:01.871182] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.752 [2024-12-05 11:08:01.885709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:24:34.752 [2024-12-05 11:08:01.885762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.885788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.752 [2024-12-05 11:08:01.885799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.752 [2024-12-05 11:08:01.885809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:24:34.752 [2024-12-05 11:08:01.885822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.886058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.752 11:08:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:24:34.752 [2024-12-05 11:08:01.886078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.886096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.752 [2024-12-05 11:08:01.886108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.886120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.752 [2024-12-05 11:08:01.886131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.886161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:34.752 [2024-12-05 11:08:01.886173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.752 [2024-12-05 11:08:01.886184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:34.752 [2024-12-05 11:08:01.886451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:34.752 [2024-12-05 11:08:01.886476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:34.752 [2024-12-05 11:08:01.886596] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.752 [2024-12-05 11:08:01.886615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420 00:24:34.752 [2024-12-05 11:08:01.886628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:34.752 [2024-12-05 11:08:01.886646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad 
file descriptor 00:24:34.752 [2024-12-05 11:08:01.886663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:34.752 [2024-12-05 11:08:01.886676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:34.752 [2024-12-05 11:08:01.886690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:34.752 [2024-12-05 11:08:01.886702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:34.752 [2024-12-05 11:08:01.886715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:35.944 6162.00 IOPS, 24.07 MiB/s [2024-12-05T11:08:03.103Z] [2024-12-05 11:08:02.885220] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.944 [2024-12-05 11:08:02.885291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420 00:24:35.944 [2024-12-05 11:08:02.885305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:35.944 [2024-12-05 11:08:02.885326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:35.944 [2024-12-05 11:08:02.885342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:35.944 [2024-12-05 11:08:02.885352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:35.944 [2024-12-05 11:08:02.885363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:35.944 [2024-12-05 11:08:02.885373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:35.944 [2024-12-05 11:08:02.885383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:36.879 4108.00 IOPS, 16.05 MiB/s [2024-12-05T11:08:04.038Z] [2024-12-05 11:08:03.883902] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.879 [2024-12-05 11:08:03.883966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420 00:24:36.879 [2024-12-05 11:08:03.883980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:36.879 [2024-12-05 11:08:03.884003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:36.879 [2024-12-05 11:08:03.884019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:36.879 [2024-12-05 11:08:03.884028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:36.879 [2024-12-05 11:08:03.884039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:36.879 [2024-12-05 11:08:03.884050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:24:36.879 [2024-12-05 11:08:03.884062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:37.812 3081.00 IOPS, 12.04 MiB/s [2024-12-05T11:08:04.971Z] [2024-12-05 11:08:04.885012] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.812 [2024-12-05 11:08:04.885080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244be50 with addr=10.0.0.2, port=4420 00:24:37.812 [2024-12-05 11:08:04.885096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244be50 is same with the state(6) to be set 00:24:37.812 [2024-12-05 11:08:04.885297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244be50 (9): Bad file descriptor 00:24:37.812 [2024-12-05 11:08:04.885492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:24:37.812 [2024-12-05 11:08:04.885503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:24:37.812 [2024-12-05 11:08:04.885513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:37.812 [2024-12-05 11:08:04.885524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:24:37.812 [2024-12-05 11:08:04.885536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:37.812 11:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.069 [2024-12-05 11:08:05.091248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.069 11:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82518 00:24:38.890 2464.80 IOPS, 9.63 MiB/s [2024-12-05T11:08:06.049Z] [2024-12-05 11:08:05.915197] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
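The listener toggle traced above (host/timeout.sh steps @101-@103, together with the nvmf_subsystem_remove_listener call at @126 further down) is the whole fault-injection mechanism of this test. A minimal standalone sketch of that sequence, using only the RPCs, NQN, address and port that appear in this log (the wrapper script itself is illustrative, not part of the test suite):

```bash
#!/usr/bin/env bash
# Force an NVMe/TCP outage and recovery against a running SPDK target.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Drop the listener: queued I/O completes as "ABORTED - SQ DELETION" and
# the host loops on "connect() failed, errno = 111" reconnect attempts.
"$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
sleep 3

# Restore the listener; the next reconnect attempt succeeds and bdev_nvme
# logs "Resetting controller successful".
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
```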
00:24:40.773 3731.67 IOPS, 14.58 MiB/s [2024-12-05T11:08:08.866Z] 4968.86 IOPS, 19.41 MiB/s [2024-12-05T11:08:09.801Z] 5891.75 IOPS, 23.01 MiB/s [2024-12-05T11:08:11.179Z] 6612.22 IOPS, 25.83 MiB/s [2024-12-05T11:08:11.179Z] 7183.80 IOPS, 28.06 MiB/s 00:24:44.020 Latency(us) 00:24:44.020 [2024-12-05T11:08:11.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.020 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.020 Verification LBA range: start 0x0 length 0x4000 00:24:44.020 NVMe0n1 : 10.01 7188.58 28.08 5151.54 0.00 10352.92 477.04 3032026.99 00:24:44.020 [2024-12-05T11:08:11.179Z] =================================================================================================================== 00:24:44.020 [2024-12-05T11:08:11.179Z] Total : 7188.58 28.08 5151.54 0.00 10352.92 0.00 3032026.99 00:24:44.020 { 00:24:44.020 "results": [ 00:24:44.020 { 00:24:44.020 "job": "NVMe0n1", 00:24:44.020 "core_mask": "0x4", 00:24:44.020 "workload": "verify", 00:24:44.020 "status": "finished", 00:24:44.020 "verify_range": { 00:24:44.020 "start": 0, 00:24:44.020 "length": 16384 00:24:44.020 }, 00:24:44.020 "queue_depth": 128, 00:24:44.020 "io_size": 4096, 00:24:44.020 "runtime": 10.00671, 00:24:44.020 "iops": 7188.576465191856, 00:24:44.020 "mibps": 28.08037681715569, 00:24:44.020 "io_failed": 51550, 00:24:44.020 "io_timeout": 0, 00:24:44.020 "avg_latency_us": 10352.92297453882, 00:24:44.020 "min_latency_us": 477.04417670682733, 00:24:44.020 "max_latency_us": 3032026.987951807 00:24:44.020 } 00:24:44.020 ], 00:24:44.020 "core_count": 1 00:24:44.020 } 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82384 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82384 ']' 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82384 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82384 00:24:44.020 killing process with pid 82384 00:24:44.020 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.020 00:24:44.020 Latency(us) 00:24:44.020 [2024-12-05T11:08:11.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.020 [2024-12-05T11:08:11.179Z] =================================================================================================================== 00:24:44.020 [2024-12-05T11:08:11.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:44.020 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82384' 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82384 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82384 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82632 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82632 /var/tmp/bdevperf.sock 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82632 ']' 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.021 11:08:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:44.021 [2024-12-05 11:08:11.024522] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:24:44.021 [2024-12-05 11:08:11.024605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82632 ] 00:24:44.021 [2024-12-05 11:08:11.160830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.279 [2024-12-05 11:08:11.212212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.279 [2024-12-05 11:08:11.253180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:44.844 11:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.844 11:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:24:44.844 11:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82632 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:24:44.844 11:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82648 00:24:44.844 11:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:24:45.101 11:08:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:45.359 NVMe0n1 00:24:45.359 11:08:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.359 11:08:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82684 00:24:45.359 11:08:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:24:45.617 Running I/O for 10 seconds... 
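The bdev_nvme_attach_controller call traced just above is what arms the reconnect policy this subtest exercises: with `--reconnect-delay-sec 2` the bdev layer retries the connection every 2 seconds, and with `--ctrlr-loss-timeout-sec 5` it gives the controller up (failing outstanding I/O) once it has been unreachable for 5 seconds. Shown in isolation, with every value taken from the trace:

```bash
# Attach the target as bdev NVMe0n1 inside the bdevperf instance that is
# listening on /var/tmp/bdevperf.sock; retry every 2 s, declare the
# controller lost after 5 s without a connection.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
```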
00:24:46.553 11:08:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.553 19558.00 IOPS, 76.40 MiB/s [2024-12-05T11:08:13.712Z] [2024-12-05 11:08:13.669686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.553
[... several hundred identical tcp.c:1790:nvmf_tcp_qpair_set_recv_state *ERROR* lines for tqpair=0x2215c50 elided; interleaved with them, the host side prints nvme_qpair.c: 223:nvme_admin_qpair_print_command *NOTICE* ASYNC EVENT REQUEST (0c) aborts for qid:0 cid:0-3 (each completed ABORTED - SQ DELETION (00/08)) and an nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state *ERROR* for tqpair=0xb68e50 ...]
[2024-12-05 11:08:13.670694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.554 [2024-12-05 11:08:13.670702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.554 [2024-12-05 11:08:13.670710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.554 [2024-12-05 11:08:13.670717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.554 [2024-12-05 11:08:13.670725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.555 [2024-12-05 11:08:13.670733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215c50 is same with the state(6) to be set 00:24:46.555 [2024-12-05 11:08:13.670789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:46.555 [2024-12-05 11:08:13.670941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.670988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.670996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 
11:08:13.671128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:50656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.555 [2024-12-05 11:08:13.671484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.555 [2024-12-05 11:08:13.671493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:46.556 [2024-12-05 11:08:13.671897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.671981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.671989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:52288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.556 [2024-12-05 11:08:13.672223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.556 [2024-12-05 11:08:13.672231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.557 [2024-12-05 11:08:13.672844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:46.557 [2024-12-05 11:08:13.672852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.672985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.672994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 
11:08:13.673040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.558 [2024-12-05 11:08:13.673188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.558 [2024-12-05 11:08:13.673199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbd5e20 is same with the state(6) to be set 00:24:46.558 [2024-12-05 11:08:13.673210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:46.558 [2024-12-05 11:08:13.673217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:46.558 [2024-12-05 11:08:13.673224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113280 len:8 PRP1 0x0 PRP2 0x0 00:24:46.558 [2024-12-05 
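Bursts like the two above compress well during triage, since nothing but the timestamp changes from one line to the next. A minimal shell sketch for condensing a saved copy of this console output; the file name build.log is a hypothetical stand-in for wherever the log was captured:

#!/usr/bin/env bash
# Count adjacent duplicate SPDK log messages, largest runs first.
# "build.log" is a hypothetical local copy of this console output.
sed -E 's/^[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ //; s/^\[[^]]*\] //' build.log \
  | uniq -c \
  | sort -rn \
  | head -5

The first sed expression drops the elapsed-time prefix, the second drops the bracketed wall-clock timestamp, so identical messages become byte-identical and uniq -c can count each run.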
00:24:46.558 [2024-12-05 11:08:13.673497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:46.558 [2024-12-05 11:08:13.673517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb68e50 (9): Bad file descriptor
00:24:46.558 [2024-12-05 11:08:13.673613] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:46.558 [2024-12-05 11:08:13.673633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb68e50 with addr=10.0.0.2, port=4420
00:24:46.558 [2024-12-05 11:08:13.673643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb68e50 is same with the state(6) to be set
00:24:46.558 [2024-12-05 11:08:13.673661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb68e50 (9): Bad file descriptor
00:24:46.559 [2024-12-05 11:08:13.673674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:46.559 [2024-12-05 11:08:13.673683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:46.559 [2024-12-05 11:08:13.673693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:46.559 [2024-12-05 11:08:13.673702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:24:46.559 [2024-12-05 11:08:13.673712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:46.559 11:08:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82684
00:24:48.425 10859.50 IOPS, 42.42 MiB/s
[2024-12-05T11:08:15.842Z] 7239.67 IOPS, 28.28 MiB/s
[2024-12-05T11:08:15.842Z] [2024-12-05 11:08:15.670693] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:48.683 [2024-12-05 11:08:15.670752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb68e50 with addr=10.0.0.2, port=4420
00:24:48.683 [2024-12-05 11:08:15.670766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb68e50 is same with the state(6) to be set
00:24:48.683 [2024-12-05 11:08:15.670791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb68e50 (9): Bad file descriptor
00:24:48.683 [2024-12-05 11:08:15.670808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:48.683 [2024-12-05 11:08:15.670817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:48.683 [2024-12-05 11:08:15.670828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:48.683 [2024-12-05 11:08:15.670838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:24:48.683 [2024-12-05 11:08:15.670849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:50.550 5429.75 IOPS, 21.21 MiB/s
[2024-12-05T11:08:17.709Z] 4343.80 IOPS, 16.97 MiB/s
[2024-12-05T11:08:17.709Z] [2024-12-05 11:08:17.667779] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:50.550 [2024-12-05 11:08:17.667823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb68e50 with addr=10.0.0.2, port=4420
00:24:50.550 [2024-12-05 11:08:17.667837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb68e50 is same with the state(6) to be set
00:24:50.550 [2024-12-05 11:08:17.667858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb68e50 (9): Bad file descriptor
00:24:50.550 [2024-12-05 11:08:17.667874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:50.550 [2024-12-05 11:08:17.667884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:50.550 [2024-12-05 11:08:17.667895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:50.550 [2024-12-05 11:08:17.667905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:24:50.550 [2024-12-05 11:08:17.667916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:52.434 3619.83 IOPS, 14.14 MiB/s
[2024-12-05T11:08:19.851Z] 3102.71 IOPS, 12.12 MiB/s
[2024-12-05T11:08:19.851Z] [2024-12-05 11:08:19.664809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:52.692 [2024-12-05 11:08:19.664869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:52.692 [2024-12-05 11:08:19.664881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:52.692 [2024-12-05 11:08:19.664890] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:24:52.692 [2024-12-05 11:08:19.664902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
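The wall-clock stamps on the successive "resetting controller" notices above (11:08:13.67, 11:08:15.67, 11:08:17.66) show the host retrying the connection on a roughly two-second cadence, consistent with the "reconnect delay" entries in trace.txt below. A sketch for verifying the cadence from a saved copy of this output; build.log is again a hypothetical stand-in:

#!/usr/bin/env bash
# Print the spacing, in seconds, between successive nvme_ctrlr_disconnect notices.
# "build.log" is a hypothetical local copy of this console output.
grep 'nvme_ctrlr_disconnect' build.log \
  | sed -E 's/.*\[2024-12-05 ([0-9:.]+)\].*/\1/' \
  | awk -F'[:.]' '{
      t = $1 * 3600 + $2 * 60 + $3 + ("0." $4)   # hh:mm:ss.us -> seconds
      if (NR > 1) printf "%.3f s\n", t - prev    # delta from previous notice
      prev = t
    }'

Against the entries above this should print two deltas close to 1.997 s, matching the spacing of the trace.txt reconnect records (1127.00 ms, 3124.09 ms, 5121.19 ms, 7118.30 ms).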
00:24:53.627 2714.88 IOPS, 10.60 MiB/s 00:24:53.627 Latency(us) 00:24:53.627 [2024-12-05T11:08:20.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.627 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:24:53.627 NVMe0n1 : 8.12 2674.09 10.45 15.76 0.00 47713.29 6422.00 7061253.96 00:24:53.627 [2024-12-05T11:08:20.786Z] =================================================================================================================== 00:24:53.627 [2024-12-05T11:08:20.786Z] Total : 2674.09 10.45 15.76 0.00 47713.29 6422.00 7061253.96 00:24:53.627 { 00:24:53.627 "results": [ 00:24:53.627 { 00:24:53.627 "job": "NVMe0n1", 00:24:53.627 "core_mask": "0x4", 00:24:53.627 "workload": "randread", 00:24:53.627 "status": "finished", 00:24:53.627 "queue_depth": 128, 00:24:53.627 "io_size": 4096, 00:24:53.627 "runtime": 8.122021, 00:24:53.627 "iops": 2674.088136437963, 00:24:53.627 "mibps": 10.445656782960793, 00:24:53.627 "io_failed": 128, 00:24:53.627 "io_timeout": 0, 00:24:53.627 "avg_latency_us": 47713.294101898515, 00:24:53.627 "min_latency_us": 6422.001606425702, 00:24:53.627 "max_latency_us": 7061253.963052209 00:24:53.627 } 00:24:53.627 ], 00:24:53.627 "core_count": 1 00:24:53.627 } 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:53.627 Attaching 5 probes... 00:24:53.627 1127.002699: reset bdev controller NVMe0 00:24:53.627 1127.059877: reconnect bdev controller NVMe0 00:24:53.627 3124.091680: reconnect delay bdev controller NVMe0 00:24:53.627 3124.109841: reconnect bdev controller NVMe0 00:24:53.627 5121.192957: reconnect delay bdev controller NVMe0 00:24:53.627 5121.210933: reconnect bdev controller NVMe0 00:24:53.627 7118.299733: reconnect delay bdev controller NVMe0 00:24:53.627 7118.319558: reconnect bdev controller NVMe0 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82648 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82632 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82632 ']' 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82632 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82632 00:24:53.627 killing process with pid 82632 00:24:53.627 Received shutdown signal, test time was about 8.216519 seconds 00:24:53.627 00:24:53.627 Latency(us) 00:24:53.627 [2024-12-05T11:08:20.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.627 [2024-12-05T11:08:20.786Z] =================================================================================================================== 00:24:53.627 [2024-12-05T11:08:20.786Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.627 11:08:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82632' 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82632 00:24:53.627 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82632 00:24:53.886 11:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@99 -- # sync 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@102 -- # set +e 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:54.144 rmmod nvme_tcp 00:24:54.144 rmmod nvme_fabrics 00:24:54.144 rmmod nvme_keyring 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@106 -- # set -e 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@107 -- # return 0 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@336 -- # '[' -n 82189 ']' 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@337 -- # killprocess 82189 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82189 ']' 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82189 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82189 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.144 killing process with pid 82189 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82189' 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82189 00:24:54.144 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82189 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@342 -- # nvmf_fini 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@254 -- # local dev 00:24:54.403 11:08:21 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:54.403 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # _dev=0 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # dev_map=() 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@274 -- # iptr 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-save 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-restore 00:24:54.663 00:24:54.663 real 0m46.962s 00:24:54.663 user 2m14.984s 00:24:54.663 sys 0m7.149s 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:54.663 ************************************ 00:24:54.663 END TEST nvmf_timeout 00:24:54.663 ************************************ 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:54.663 00:24:54.663 real 5m9.273s 00:24:54.663 user 12m54.527s 00:24:54.663 sys 1m27.710s 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.663 11:08:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.663 ************************************ 00:24:54.663 END TEST nvmf_host 00:24:54.663 ************************************ 00:24:54.663 11:08:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:54.663 11:08:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:54.663 00:24:54.663 real 12m40.258s 00:24:54.663 user 29m15.978s 00:24:54.663 sys 3m52.869s 00:24:54.663 11:08:21 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.663 11:08:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.663 ************************************ 00:24:54.663 END TEST nvmf_tcp 00:24:54.663 ************************************ 00:24:54.923 11:08:21 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:24:54.923 11:08:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:54.923 11:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:54.923 11:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.923 11:08:21 -- common/autotest_common.sh@10 -- # set +x 00:24:54.923 ************************************ 00:24:54.923 START TEST nvmf_dif 00:24:54.923 ************************************ 00:24:54.923 11:08:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:54.923 * Looking for test storage... 
00:24:54.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:54.923 11:08:21 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:54.923 11:08:21 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:24:54.923 11:08:21 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.923 11:08:22 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:54.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.923 --rc genhtml_branch_coverage=1 00:24:54.923 --rc genhtml_function_coverage=1 00:24:54.923 --rc genhtml_legend=1 00:24:54.923 --rc geninfo_all_blocks=1 00:24:54.923 --rc geninfo_unexecuted_blocks=1 00:24:54.923 00:24:54.923 ' 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:54.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.923 --rc genhtml_branch_coverage=1 00:24:54.923 --rc genhtml_function_coverage=1 00:24:54.923 --rc genhtml_legend=1 00:24:54.923 --rc geninfo_all_blocks=1 00:24:54.923 --rc geninfo_unexecuted_blocks=1 00:24:54.923 00:24:54.923 ' 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:54.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.923 --rc genhtml_branch_coverage=1 00:24:54.923 --rc genhtml_function_coverage=1 00:24:54.923 --rc genhtml_legend=1 00:24:54.923 --rc geninfo_all_blocks=1 00:24:54.923 --rc geninfo_unexecuted_blocks=1 00:24:54.923 00:24:54.923 ' 00:24:54.923 11:08:22 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:54.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.923 --rc genhtml_branch_coverage=1 00:24:54.923 --rc genhtml_function_coverage=1 00:24:54.923 --rc genhtml_legend=1 00:24:54.923 --rc geninfo_all_blocks=1 00:24:54.923 --rc geninfo_unexecuted_blocks=1 00:24:54.923 00:24:54.923 ' 00:24:54.923 11:08:22 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.924 11:08:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:55.183 11:08:22 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.184 11:08:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:55.184 11:08:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.184 11:08:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.184 11:08:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.184 11:08:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.184 11:08:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.184 11:08:22 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.184 11:08:22 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:55.184 11:08:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:55.184 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:55.184 11:08:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:55.184 11:08:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:55.184 11:08:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:55.184 11:08:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:55.184 11:08:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:55.184 11:08:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:24:55.184 11:08:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:55.184 11:08:22 
nvmf_dif -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@223 -- # create_target_ns 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:55.184 11:08:22 nvmf_dif -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:55.184 11:08:22 
nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target0 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:55.184 11:08:22 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- 
# eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:55.185 10.0.0.1 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:55.185 10.0.0.2 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 
00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:55.185 11:08:22 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:55.185 11:08:22 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target1 00:24:55.445 
11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772163 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:55.445 10.0.0.3 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772164 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:55.445 11:08:22 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:55.446 10.0.0.4 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local 
dev=initiator1 in_ns= 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:55.446 11:08:22 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address 
initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:55.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:24:55.446 00:24:55.446 --- 10.0.0.1 ping statistics --- 00:24:55.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.446 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 
00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:55.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:24:55.446 00:24:55.446 --- 10.0.0.2 ping statistics --- 00:24:55.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.446 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:55.446 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:55.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:55.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:55.705 00:24:55.705 --- 10.0.0.3 ping statistics --- 00:24:55.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.705 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:55.705 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:55.705 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:24:55.705 00:24:55.705 --- 10.0.0.4 ping statistics --- 00:24:55.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.705 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:55.705 11:08:22 nvmf_dif -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.705 11:08:22 nvmf_dif -- nvmf/common.sh@281 -- # return 0 00:24:55.705 11:08:22 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:24:55.705 11:08:22 nvmf_dif -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:56.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:56.272 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:56.272 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator1/ifalias 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:56.272 11:08:23 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:56.272 11:08:23 nvmf_dif -- 
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:56.272 11:08:23 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:56.272 11:08:23 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=83180 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:56.272 11:08:23 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 83180 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83180 ']' 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.272 11:08:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:56.532 [2024-12-05 11:08:23.443097] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:24:56.532 [2024-12-05 11:08:23.443159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.532 [2024-12-05 11:08:23.598654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.532 [2024-12-05 11:08:23.641212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.532 [2024-12-05 11:08:23.641260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.532 [2024-12-05 11:08:23.641270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.532 [2024-12-05 11:08:23.641290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.532 [2024-12-05 11:08:23.641298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
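The EAL and app notices around this point are nvmf_tgt coming up inside the nvmf_ns_spdk network namespace: nvmfappstart forks the target with the flags shown above, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, using the paths from this run; the rpc_get_methods probe and the poll interval are illustrative, not lifted from waitforlisten itself:

    # sketch: start the target in the test namespace, then poll its RPC socket
    ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # rpc_get_methods is an illustrative liveness probe; waitforlisten's internals differ
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done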
00:24:56.532 [2024-12-05 11:08:23.641579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.532 [2024-12-05 11:08:23.685197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:24:57.517 11:08:24 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 11:08:24 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.517 11:08:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:57.517 11:08:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 [2024-12-05 11:08:24.378393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.517 11:08:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 ************************************ 00:24:57.517 START TEST fio_dif_1_default 00:24:57.517 ************************************ 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 bdev_null0 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.517 11:08:24 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:57.517 [2024-12-05 11:08:24.442444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.517 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:57.518 { 00:24:57.518 "params": { 00:24:57.518 "name": "Nvme$subsystem", 00:24:57.518 "trtype": "$TEST_TRANSPORT", 00:24:57.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.518 "adrfam": "ipv4", 00:24:57.518 "trsvcid": "$NVMF_PORT", 00:24:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.518 "hdgst": ${hdgst:-false}, 00:24:57.518 "ddgst": ${ddgst:-false} 00:24:57.518 }, 00:24:57.518 "method": "bdev_nvme_attach_controller" 00:24:57.518 } 00:24:57.518 EOF 00:24:57.518 )") 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@72 -- # (( file = 1 )) 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:57.518 "params": { 00:24:57.518 "name": "Nvme0", 00:24:57.518 "trtype": "tcp", 00:24:57.518 "traddr": "10.0.0.2", 00:24:57.518 "adrfam": "ipv4", 00:24:57.518 "trsvcid": "4420", 00:24:57.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:57.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:57.518 "hdgst": false, 00:24:57.518 "ddgst": false 00:24:57.518 }, 00:24:57.518 "method": "bdev_nvme_attach_controller" 00:24:57.518 }' 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:57.518 11:08:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:57.816 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:57.816 fio-3.35 00:24:57.816 Starting 1 thread 00:25:10.058 00:25:10.058 filename0: (groupid=0, jobs=1): err= 0: pid=83246: Thu Dec 5 11:08:35 2024 00:25:10.058 read: IOPS=11.9k, BW=46.5MiB/s (48.8MB/s)(465MiB/10001msec) 00:25:10.058 slat (nsec): min=5639, max=72947, avg=6173.93, stdev=1302.19 00:25:10.058 clat (usec): min=286, max=2317, avg=318.52, stdev=25.49 00:25:10.058 lat (usec): min=292, max=2351, avg=324.70, stdev=25.71 00:25:10.058 clat percentiles (usec): 00:25:10.058 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:25:10.058 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 314], 60.00th=[ 318], 00:25:10.058 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 334], 95.00th=[ 343], 00:25:10.058 | 99.00th=[ 420], 99.50th=[ 453], 99.90th=[ 502], 99.95th=[ 529], 00:25:10.058 | 99.99th=[ 848] 00:25:10.058 bw ( KiB/s): min=46272, max=48480, per=100.00%, avg=47705.26, stdev=565.09, samples=19 00:25:10.058 iops : min=11568, max=12120, avg=11926.32, stdev=141.27, samples=19 00:25:10.058 lat (usec) : 500=99.90%, 750=0.08%, 1000=0.02% 
00:25:10.058 lat (msec) : 2=0.01%, 4=0.01% 00:25:10.058 cpu : usr=82.32%, sys=16.12%, ctx=19, majf=0, minf=9 00:25:10.058 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:10.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:10.058 issued rwts: total=119164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:10.058 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:10.058 00:25:10.058 Run status group 0 (all jobs): 00:25:10.058 READ: bw=46.5MiB/s (48.8MB/s), 46.5MiB/s-46.5MiB/s (48.8MB/s-48.8MB/s), io=465MiB (488MB), run=10001-10001msec 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 00:25:10.058 real 0m11.000s 00:25:10.058 user 0m8.850s 00:25:10.058 sys 0m1.905s 00:25:10.058 ************************************ 00:25:10.058 END TEST fio_dif_1_default 00:25:10.058 ************************************ 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:10.058 11:08:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:10.058 11:08:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 ************************************ 00:25:10.058 START TEST fio_dif_1_multi_subsystems 00:25:10.058 ************************************ 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
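Each create_subsystem/destroy_subsystem pair in these tests reduces to a handful of rpc_cmd calls (rpc_cmd forwards to scripts/rpc.py against the target started earlier). Condensed from the create and destroy traces around this point, for subsystem 0 of the first test:

    # setup: null bdev with 512B blocks, 16B metadata and DIF type 1,
    # then an NVMe-oF subsystem, a namespace, and a TCP listener
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # teardown, as in the destroy_subsystems trace above
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_null_delete bdev_null0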
00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 bdev_null0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 [2024-12-05 11:08:35.527786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 bdev_null1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:10.058 { 00:25:10.058 "params": { 00:25:10.058 "name": "Nvme$subsystem", 00:25:10.058 "trtype": "$TEST_TRANSPORT", 00:25:10.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.058 "adrfam": "ipv4", 00:25:10.058 "trsvcid": "$NVMF_PORT", 00:25:10.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.058 "hdgst": ${hdgst:-false}, 00:25:10.058 "ddgst": ${ddgst:-false} 00:25:10.058 }, 00:25:10.058 "method": "bdev_nvme_attach_controller" 00:25:10.058 } 00:25:10.058 EOF 00:25:10.058 )") 00:25:10.058 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:25:10.059 11:08:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:10.059 { 00:25:10.059 "params": { 00:25:10.059 "name": "Nvme$subsystem", 00:25:10.059 "trtype": "$TEST_TRANSPORT", 00:25:10.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:10.059 "adrfam": "ipv4", 00:25:10.059 "trsvcid": "$NVMF_PORT", 00:25:10.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:10.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:10.059 "hdgst": ${hdgst:-false}, 00:25:10.059 "ddgst": ${ddgst:-false} 00:25:10.059 }, 00:25:10.059 "method": "bdev_nvme_attach_controller" 00:25:10.059 } 00:25:10.059 EOF 00:25:10.059 )") 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
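gen_nvmf_target_json emits one bdev_nvme_attach_controller parameter block per subsystem id, comma-joins them (the IFS=, / printf pair in the next trace lines), and runs the result through jq before handing it to fio on /dev/fd/62. The merged blob printed below corresponds, roughly, to a bdev-subsystem JSON config of the following shape; note the outer subsystems/bdev/config wrapper is reconstructed from nvmf/common.sh and is not shown verbatim in this trace:

    # sketch of the config fio's spdk_bdev engine receives; wrapper keys assumed,
    # the two attach_controller entries are verbatim from the printf trace below
    jq . <<'JSON'
    { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false, "ddgst": false } },
      { "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false } } ] } ] }
    JSON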
00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:10.059 "params": { 00:25:10.059 "name": "Nvme0", 00:25:10.059 "trtype": "tcp", 00:25:10.059 "traddr": "10.0.0.2", 00:25:10.059 "adrfam": "ipv4", 00:25:10.059 "trsvcid": "4420", 00:25:10.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.059 "hdgst": false, 00:25:10.059 "ddgst": false 00:25:10.059 }, 00:25:10.059 "method": "bdev_nvme_attach_controller" 00:25:10.059 },{ 00:25:10.059 "params": { 00:25:10.059 "name": "Nvme1", 00:25:10.059 "trtype": "tcp", 00:25:10.059 "traddr": "10.0.0.2", 00:25:10.059 "adrfam": "ipv4", 00:25:10.059 "trsvcid": "4420", 00:25:10.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:10.059 "hdgst": false, 00:25:10.059 "ddgst": false 00:25:10.059 }, 00:25:10.059 "method": "bdev_nvme_attach_controller" 00:25:10.059 }' 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:10.059 11:08:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:10.059 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:10.059 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:10.059 fio-3.35 00:25:10.059 Starting 2 threads 00:25:20.045 00:25:20.045 filename0: (groupid=0, jobs=1): err= 0: pid=83410: Thu Dec 5 11:08:46 2024 00:25:20.045 read: IOPS=6213, BW=24.3MiB/s (25.5MB/s)(243MiB/10001msec) 00:25:20.045 slat (usec): min=5, max=257, avg=11.38, stdev= 3.61 00:25:20.045 clat (usec): min=343, max=2035, avg=613.96, stdev=41.64 00:25:20.045 lat (usec): min=349, max=2046, avg=625.34, stdev=42.46 00:25:20.045 clat percentiles (usec): 00:25:20.045 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 570], 20.00th=[ 586], 00:25:20.045 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[ 611], 60.00th=[ 619], 00:25:20.045 | 70.00th=[ 627], 80.00th=[ 635], 90.00th=[ 652], 95.00th=[ 676], 00:25:20.045 | 99.00th=[ 742], 99.50th=[ 799], 99.90th=[ 898], 99.95th=[ 963], 00:25:20.045 | 99.99th=[ 1221] 00:25:20.045 bw ( KiB/s): min=23776, max=25312, per=50.02%, avg=24869.05, stdev=374.29, samples=19 00:25:20.045 iops : min= 5944, max= 6328, 
avg=6217.26, stdev=93.57, samples=19 00:25:20.045 lat (usec) : 500=0.21%, 750=98.93%, 1000=0.82% 00:25:20.045 lat (msec) : 2=0.04%, 4=0.01% 00:25:20.045 cpu : usr=88.03%, sys=10.74%, ctx=117, majf=0, minf=0 00:25:20.045 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.045 issued rwts: total=62143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.045 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:20.045 filename1: (groupid=0, jobs=1): err= 0: pid=83411: Thu Dec 5 11:08:46 2024 00:25:20.045 read: IOPS=6214, BW=24.3MiB/s (25.5MB/s)(243MiB/10001msec) 00:25:20.045 slat (usec): min=5, max=177, avg=11.22, stdev= 3.05 00:25:20.045 clat (usec): min=311, max=2037, avg=614.03, stdev=36.35 00:25:20.045 lat (usec): min=316, max=2048, avg=625.25, stdev=36.57 00:25:20.045 clat percentiles (usec): 00:25:20.045 | 1.00th=[ 562], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 586], 00:25:20.045 | 30.00th=[ 594], 40.00th=[ 603], 50.00th=[ 611], 60.00th=[ 619], 00:25:20.045 | 70.00th=[ 627], 80.00th=[ 635], 90.00th=[ 652], 95.00th=[ 668], 00:25:20.045 | 99.00th=[ 734], 99.50th=[ 783], 99.90th=[ 889], 99.95th=[ 922], 00:25:20.045 | 99.99th=[ 1188] 00:25:20.045 bw ( KiB/s): min=23776, max=25312, per=50.03%, avg=24874.11, stdev=378.04, samples=19 00:25:20.045 iops : min= 5944, max= 6328, avg=6218.53, stdev=94.51, samples=19 00:25:20.045 lat (usec) : 500=0.03%, 750=99.25%, 1000=0.68% 00:25:20.045 lat (msec) : 2=0.03%, 4=0.01% 00:25:20.045 cpu : usr=88.06%, sys=10.98%, ctx=14, majf=0, minf=0 00:25:20.045 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:20.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:20.045 issued rwts: total=62156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:20.045 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:20.045 00:25:20.045 Run status group 0 (all jobs): 00:25:20.045 READ: bw=48.5MiB/s (50.9MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=486MiB (509MB), run=10001-10001msec 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 ************************************ 00:25:20.045 END TEST fio_dif_1_multi_subsystems 00:25:20.045 ************************************ 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 00:25:20.045 real 0m11.188s 00:25:20.045 user 0m18.414s 00:25:20.045 sys 0m2.522s 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 11:08:46 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:20.045 11:08:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:20.045 11:08:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 ************************************ 00:25:20.045 START TEST fio_dif_rand_params 00:25:20.045 ************************************ 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:20.045 11:08:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 bdev_null0 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.045 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:20.046 [2024-12-05 11:08:46.799478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:20.046 { 00:25:20.046 "params": { 00:25:20.046 "name": "Nvme$subsystem", 00:25:20.046 "trtype": "$TEST_TRANSPORT", 00:25:20.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.046 "adrfam": "ipv4", 00:25:20.046 "trsvcid": "$NVMF_PORT", 00:25:20.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.046 "hdgst": ${hdgst:-false}, 00:25:20.046 "ddgst": ${ddgst:-false} 00:25:20.046 }, 00:25:20.046 "method": "bdev_nvme_attach_controller" 00:25:20.046 } 00:25:20.046 EOF 
00:25:20.046 )") 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:20.046 "params": { 00:25:20.046 "name": "Nvme0", 00:25:20.046 "trtype": "tcp", 00:25:20.046 "traddr": "10.0.0.2", 00:25:20.046 "adrfam": "ipv4", 00:25:20.046 "trsvcid": "4420", 00:25:20.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.046 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.046 "hdgst": false, 00:25:20.046 "ddgst": false 00:25:20.046 }, 00:25:20.046 "method": "bdev_nvme_attach_controller" 00:25:20.046 }' 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:20.046 11:08:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.046 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:20.046 ... 
00:25:20.046 fio-3.35 00:25:20.046 Starting 3 threads 00:25:26.630 00:25:26.630 filename0: (groupid=0, jobs=1): err= 0: pid=83568: Thu Dec 5 11:08:52 2024 00:25:26.630 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(197MiB/5009msec) 00:25:26.630 slat (nsec): min=5964, max=33946, avg=10192.09, stdev=4738.96 00:25:26.630 clat (usec): min=4139, max=12382, avg=9496.72, stdev=431.34 00:25:26.630 lat (usec): min=4146, max=12396, avg=9506.91, stdev=431.65 00:25:26.630 clat percentiles (usec): 00:25:26.630 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9241], 00:25:26.630 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:25:26.630 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10159], 00:25:26.630 | 99.00th=[10552], 99.50th=[11600], 99.90th=[12387], 99.95th=[12387], 00:25:26.630 | 99.99th=[12387] 00:25:26.630 bw ( KiB/s): min=39168, max=41472, per=33.34%, avg=40303.40, stdev=805.23, samples=10 00:25:26.630 iops : min= 306, max= 324, avg=314.80, stdev= 6.20, samples=10 00:25:26.630 lat (msec) : 10=91.57%, 20=8.43% 00:25:26.630 cpu : usr=88.68%, sys=10.86%, ctx=7, majf=0, minf=0 00:25:26.630 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 issued rwts: total=1578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:26.630 filename0: (groupid=0, jobs=1): err= 0: pid=83569: Thu Dec 5 11:08:52 2024 00:25:26.630 read: IOPS=314, BW=39.4MiB/s (41.3MB/s)(197MiB/5003msec) 00:25:26.630 slat (nsec): min=6059, max=37511, avg=13102.40, stdev=5121.86 00:25:26.630 clat (usec): min=9096, max=13375, avg=9497.57, stdev=362.43 00:25:26.630 lat (usec): min=9106, max=13393, avg=9510.68, stdev=363.50 00:25:26.630 clat percentiles (usec): 00:25:26.630 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9241], 00:25:26.630 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9372], 60.00th=[ 9503], 00:25:26.630 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[10159], 00:25:26.630 | 99.00th=[10552], 99.50th=[10683], 99.90th=[13304], 99.95th=[13435], 00:25:26.630 | 99.99th=[13435] 00:25:26.630 bw ( KiB/s): min=38400, max=41472, per=33.32%, avg=40277.33, stdev=1024.00, samples=9 00:25:26.630 iops : min= 300, max= 324, avg=314.67, stdev= 8.00, samples=9 00:25:26.630 lat (msec) : 10=91.68%, 20=8.32% 00:25:26.630 cpu : usr=89.18%, sys=10.00%, ctx=66, majf=0, minf=0 00:25:26.630 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 issued rwts: total=1575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:26.630 filename0: (groupid=0, jobs=1): err= 0: pid=83570: Thu Dec 5 11:08:52 2024 00:25:26.630 read: IOPS=315, BW=39.4MiB/s (41.3MB/s)(197MiB/5005msec) 00:25:26.630 slat (nsec): min=5857, max=36020, avg=11982.48, stdev=4956.67 00:25:26.630 clat (usec): min=4417, max=13378, avg=9485.66, stdev=432.03 00:25:26.630 lat (usec): min=4423, max=13394, avg=9497.64, stdev=433.00 00:25:26.630 clat percentiles (usec): 00:25:26.630 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9110], 20.00th=[ 9241], 00:25:26.630 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 
9372], 60.00th=[ 9503], 00:25:26.630 | 70.00th=[ 9634], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[10159], 00:25:26.630 | 99.00th=[10552], 99.50th=[10683], 99.90th=[13304], 99.95th=[13435], 00:25:26.630 | 99.99th=[13435] 00:25:26.630 bw ( KiB/s): min=38400, max=41472, per=33.32%, avg=40277.33, stdev=1024.00, samples=9 00:25:26.630 iops : min= 300, max= 324, avg=314.67, stdev= 8.00, samples=9 00:25:26.630 lat (msec) : 10=91.70%, 20=8.30% 00:25:26.630 cpu : usr=89.11%, sys=10.41%, ctx=3, majf=0, minf=0 00:25:26.630 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:26.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.630 issued rwts: total=1578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.630 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:26.630 00:25:26.630 Run status group 0 (all jobs): 00:25:26.630 READ: bw=118MiB/s (124MB/s), 39.4MiB/s-39.4MiB/s (41.3MB/s-41.3MB/s), io=591MiB (620MB), run=5003-5009msec 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:26.630 11:08:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 bdev_null0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 [2024-12-05 11:08:52.855700] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 bdev_null1 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:26.630 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.631 bdev_null2 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:26.631 { 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme$subsystem", 00:25:26.631 "trtype": "$TEST_TRANSPORT", 00:25:26.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "$NVMF_PORT", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.631 "hdgst": ${hdgst:-false}, 00:25:26.631 "ddgst": ${ddgst:-false} 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 } 00:25:26.631 EOF 00:25:26.631 )") 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:26.631 { 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme$subsystem", 00:25:26.631 "trtype": "$TEST_TRANSPORT", 00:25:26.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "$NVMF_PORT", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.631 "hdgst": ${hdgst:-false}, 00:25:26.631 "ddgst": ${ddgst:-false} 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 } 00:25:26.631 EOF 00:25:26.631 )") 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:26.631 11:08:52 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:26.631 { 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme$subsystem", 00:25:26.631 "trtype": "$TEST_TRANSPORT", 00:25:26.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "$NVMF_PORT", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:26.631 "hdgst": ${hdgst:-false}, 00:25:26.631 "ddgst": ${ddgst:-false} 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 } 00:25:26.631 EOF 00:25:26.631 )") 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme0", 00:25:26.631 "trtype": "tcp", 00:25:26.631 "traddr": "10.0.0.2", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "4420", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:26.631 "hdgst": false, 00:25:26.631 "ddgst": false 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 },{ 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme1", 00:25:26.631 "trtype": "tcp", 00:25:26.631 "traddr": "10.0.0.2", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "4420", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:26.631 "hdgst": false, 00:25:26.631 "ddgst": false 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 },{ 00:25:26.631 "params": { 00:25:26.631 "name": "Nvme2", 00:25:26.631 "trtype": "tcp", 00:25:26.631 "traddr": "10.0.0.2", 00:25:26.631 "adrfam": "ipv4", 00:25:26.631 "trsvcid": "4420", 00:25:26.631 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:26.631 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:26.631 "hdgst": false, 00:25:26.631 "ddgst": false 00:25:26.631 }, 00:25:26.631 "method": "bdev_nvme_attach_controller" 00:25:26.631 }' 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:26.631 11:08:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:26.631 11:08:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:26.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:26.631 ... 00:25:26.631 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:26.631 ... 00:25:26.631 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:26.631 ... 00:25:26.631 fio-3.35 00:25:26.631 Starting 24 threads 00:25:38.875 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83669: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=283, BW=1136KiB/s (1163kB/s)(11.1MiB/10052msec) 00:25:38.876 slat (usec): min=6, max=4022, avg=14.79, stdev=84.34 00:25:38.876 clat (usec): min=1247, max=124046, avg=56191.42, stdev=22844.79 00:25:38.876 lat (usec): min=1255, max=124054, avg=56206.21, stdev=22844.80 00:25:38.876 clat percentiles (usec): 00:25:38.876 | 1.00th=[ 1582], 5.00th=[ 9241], 10.00th=[ 22676], 20.00th=[ 40109], 00:25:38.876 | 30.00th=[ 47449], 40.00th=[ 53740], 50.00th=[ 57410], 60.00th=[ 61080], 00:25:38.876 | 70.00th=[ 67634], 80.00th=[ 74974], 90.00th=[ 84411], 95.00th=[ 92799], 00:25:38.876 | 99.00th=[103285], 99.50th=[108528], 99.90th=[124257], 99.95th=[124257], 00:25:38.876 | 99.99th=[124257] 00:25:38.876 bw ( KiB/s): min= 752, max= 3200, per=4.38%, avg=1135.05, stdev=508.86, samples=20 00:25:38.876 iops : min= 188, max= 800, avg=283.75, stdev=127.21, samples=20 00:25:38.876 lat (msec) : 2=1.19%, 4=3.29%, 10=1.61%, 20=3.05%, 50=25.44% 00:25:38.876 lat (msec) : 100=63.91%, 250=1.51% 00:25:38.876 cpu : usr=42.38%, sys=3.17%, ctx=1228, majf=0, minf=0 00:25:38.876 IO depths : 1=0.3%, 2=1.3%, 4=4.2%, 8=78.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83670: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=266, BW=1066KiB/s (1092kB/s)(10.4MiB/10029msec) 00:25:38.876 slat (nsec): min=2851, max=42211, avg=13475.36, stdev=5394.72 00:25:38.876 clat (msec): min=14, max=127, avg=59.93, stdev=18.39 00:25:38.876 lat (msec): min=14, max=127, avg=59.94, stdev=18.39 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 45], 00:25:38.876 | 30.00th=[ 54], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 63], 00:25:38.876 | 70.00th=[ 67], 80.00th=[ 75], 90.00th=[ 86], 95.00th=[ 92], 00:25:38.876 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.876 | 99.99th=[ 128] 00:25:38.876 bw ( KiB/s): min= 784, max= 1680, per=4.09%, avg=1062.65, stdev=197.17, samples=20 00:25:38.876 iops : min= 196, max= 420, avg=265.65, stdev=49.29, samples=20 00:25:38.876 lat (msec) : 20=2.39%, 50=22.90%, 100=73.44%, 250=1.27% 00:25:38.876 cpu : usr=40.10%, sys=2.87%, ctx=1412, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=0.7%, 4=3.0%, 8=79.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=88.3%, 
8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83671: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=278, BW=1113KiB/s (1139kB/s)(10.9MiB/10026msec) 00:25:38.876 slat (usec): min=4, max=7016, avg=19.21, stdev=170.53 00:25:38.876 clat (msec): min=9, max=125, avg=57.41, stdev=18.80 00:25:38.876 lat (msec): min=9, max=125, avg=57.43, stdev=18.79 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 41], 00:25:38.876 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:25:38.876 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 91], 00:25:38.876 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 124], 99.95th=[ 124], 00:25:38.876 | 99.99th=[ 127] 00:25:38.876 bw ( KiB/s): min= 792, max= 1928, per=4.28%, avg=1109.30, stdev=247.17, samples=20 00:25:38.876 iops : min= 198, max= 482, avg=277.30, stdev=61.80, samples=20 00:25:38.876 lat (msec) : 10=0.29%, 20=2.29%, 50=32.59%, 100=63.32%, 250=1.51% 00:25:38.876 cpu : usr=43.69%, sys=3.53%, ctx=1164, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83672: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=277, BW=1110KiB/s (1137kB/s)(10.9MiB/10011msec) 00:25:38.876 slat (usec): min=2, max=8037, avg=22.11, stdev=228.42 00:25:38.876 clat (msec): min=15, max=126, avg=57.53, stdev=17.68 00:25:38.876 lat (msec): min=15, max=127, avg=57.55, stdev=17.68 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 43], 00:25:38.876 | 30.00th=[ 47], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.876 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 91], 00:25:38.876 | 99.00th=[ 101], 99.50th=[ 106], 99.90th=[ 128], 99.95th=[ 128], 00:25:38.876 | 99.99th=[ 128] 00:25:38.876 bw ( KiB/s): min= 840, max= 1368, per=4.20%, avg=1090.53, stdev=161.02, samples=19 00:25:38.876 iops : min= 210, max= 342, avg=272.63, stdev=40.26, samples=19 00:25:38.876 lat (msec) : 20=0.54%, 50=36.74%, 100=61.64%, 250=1.08% 00:25:38.876 cpu : usr=35.20%, sys=3.05%, ctx=1037, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83673: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=267, BW=1068KiB/s (1094kB/s)(10.4MiB/10003msec) 00:25:38.876 slat (usec): min=2, max=8039, avg=23.35, stdev=268.36 00:25:38.876 clat (msec): min=7, max=126, avg=59.81, stdev=20.48 00:25:38.876 lat (msec): min=7, max=127, avg=59.83, stdev=20.48 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 16], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 44], 
00:25:38.876 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.876 | 70.00th=[ 69], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 97], 00:25:38.876 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:25:38.876 | 99.99th=[ 128] 00:25:38.876 bw ( KiB/s): min= 640, max= 1304, per=4.01%, avg=1039.84, stdev=218.49, samples=19 00:25:38.876 iops : min= 160, max= 326, avg=259.95, stdev=54.61, samples=19 00:25:38.876 lat (msec) : 10=0.45%, 20=1.27%, 50=33.13%, 100=60.91%, 250=4.23% 00:25:38.876 cpu : usr=31.14%, sys=2.28%, ctx=918, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.8%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83674: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=268, BW=1076KiB/s (1102kB/s)(10.5MiB/10014msec) 00:25:38.876 slat (usec): min=2, max=8006, avg=19.59, stdev=204.86 00:25:38.876 clat (msec): min=17, max=124, avg=59.41, stdev=17.62 00:25:38.876 lat (msec): min=17, max=124, avg=59.43, stdev=17.62 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 45], 00:25:38.876 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:25:38.876 | 70.00th=[ 68], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 93], 00:25:38.876 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.876 | 99.99th=[ 125] 00:25:38.876 bw ( KiB/s): min= 816, max= 1320, per=4.13%, avg=1071.25, stdev=162.91, samples=20 00:25:38.876 iops : min= 204, max= 330, avg=267.70, stdev=40.72, samples=20 00:25:38.876 lat (msec) : 20=0.45%, 50=33.49%, 100=64.76%, 250=1.30% 00:25:38.876 cpu : usr=30.53%, sys=2.48%, ctx=937, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.5%, 32=0.0%, >=64=0.0% 00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83675: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=263, BW=1055KiB/s (1081kB/s)(10.3MiB/10032msec) 00:25:38.876 slat (nsec): min=6230, max=42077, avg=14415.76, stdev=4974.70 00:25:38.876 clat (msec): min=14, max=124, avg=60.53, stdev=18.38 00:25:38.876 lat (msec): min=14, max=124, avg=60.54, stdev=18.38 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 46], 00:25:38.876 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 63], 00:25:38.876 | 70.00th=[ 70], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 94], 00:25:38.876 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.876 | 99.99th=[ 126] 00:25:38.876 bw ( KiB/s): min= 784, max= 1627, per=4.06%, avg=1052.40, stdev=201.59, samples=20 00:25:38.876 iops : min= 196, max= 406, avg=263.05, stdev=50.28, samples=20 00:25:38.876 lat (msec) : 20=1.81%, 50=27.99%, 100=69.02%, 250=1.17% 00:25:38.876 cpu : usr=30.64%, sys=2.61%, ctx=919, majf=0, minf=9 00:25:38.876 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.1%, 16=16.8%, 32=0.0%, >=64=0.0% 
00:25:38.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.876 issued rwts: total=2647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.876 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.876 filename0: (groupid=0, jobs=1): err= 0: pid=83676: Thu Dec 5 11:09:03 2024 00:25:38.876 read: IOPS=270, BW=1083KiB/s (1109kB/s)(10.6MiB/10031msec) 00:25:38.876 slat (usec): min=2, max=8071, avg=22.73, stdev=242.48 00:25:38.876 clat (msec): min=9, max=124, avg=58.90, stdev=20.39 00:25:38.876 lat (msec): min=9, max=124, avg=58.92, stdev=20.39 00:25:38.876 clat percentiles (msec): 00:25:38.876 | 1.00th=[ 14], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 43], 00:25:38.876 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:25:38.876 | 70.00th=[ 68], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 94], 00:25:38.877 | 99.00th=[ 110], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.877 | 99.99th=[ 125] 00:25:38.877 bw ( KiB/s): min= 712, max= 2052, per=4.18%, avg=1083.25, stdev=291.87, samples=20 00:25:38.877 iops : min= 178, max= 513, avg=270.80, stdev=72.97, samples=20 00:25:38.877 lat (msec) : 10=0.52%, 20=3.64%, 50=28.08%, 100=65.70%, 250=2.06% 00:25:38.877 cpu : usr=44.00%, sys=3.26%, ctx=1307, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=1.6%, 4=6.4%, 8=76.4%, 16=15.5%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83677: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=274, BW=1099KiB/s (1126kB/s)(10.7MiB/10007msec) 00:25:38.877 slat (usec): min=2, max=4039, avg=18.51, stdev=123.47 00:25:38.877 clat (msec): min=7, max=124, avg=58.14, stdev=18.38 00:25:38.877 lat (msec): min=7, max=124, avg=58.15, stdev=18.38 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 42], 00:25:38.877 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.877 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 92], 00:25:38.877 | 99.00th=[ 102], 99.50th=[ 104], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.877 | 99.99th=[ 125] 00:25:38.877 bw ( KiB/s): min= 824, max= 1608, per=4.22%, avg=1095.95, stdev=200.15, samples=20 00:25:38.877 iops : min= 206, max= 402, avg=273.95, stdev=50.00, samples=20 00:25:38.877 lat (msec) : 10=0.47%, 20=1.35%, 50=32.18%, 100=64.87%, 250=1.13% 00:25:38.877 cpu : usr=42.55%, sys=3.13%, ctx=1431, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83678: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=274, BW=1096KiB/s (1123kB/s)(10.8MiB/10041msec) 00:25:38.877 slat (usec): min=6, max=8036, avg=19.35, stdev=216.22 00:25:38.877 clat (usec): min=1412, max=153216, avg=58215.71, stdev=21134.61 00:25:38.877 lat (usec): min=1422, 
max=153223, avg=58235.06, stdev=21133.90 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 35], 20.00th=[ 45], 00:25:38.877 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 00:25:38.877 | 70.00th=[ 68], 80.00th=[ 75], 90.00th=[ 84], 95.00th=[ 93], 00:25:38.877 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 129], 99.95th=[ 129], 00:25:38.877 | 99.99th=[ 155] 00:25:38.877 bw ( KiB/s): min= 744, max= 2549, per=4.23%, avg=1096.10, stdev=375.91, samples=20 00:25:38.877 iops : min= 186, max= 637, avg=274.00, stdev=93.93, samples=20 00:25:38.877 lat (msec) : 2=0.51%, 4=1.24%, 10=2.25%, 20=2.47%, 50=24.89% 00:25:38.877 lat (msec) : 100=67.33%, 250=1.31% 00:25:38.877 cpu : usr=36.03%, sys=3.04%, ctx=1073, majf=0, minf=0 00:25:38.877 IO depths : 1=0.1%, 2=0.9%, 4=3.0%, 8=79.5%, 16=16.5%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=88.6%, 8=10.8%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83679: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=272, BW=1091KiB/s (1117kB/s)(10.7MiB/10012msec) 00:25:38.877 slat (usec): min=2, max=8043, avg=31.50, stdev=375.17 00:25:38.877 clat (msec): min=13, max=124, avg=58.54, stdev=18.27 00:25:38.877 lat (msec): min=13, max=124, avg=58.58, stdev=18.27 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 19], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 44], 00:25:38.877 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.877 | 70.00th=[ 66], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 93], 00:25:38.877 | 99.00th=[ 102], 99.50th=[ 115], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.877 | 99.99th=[ 125] 00:25:38.877 bw ( KiB/s): min= 816, max= 1448, per=4.19%, avg=1087.40, stdev=188.10, samples=20 00:25:38.877 iops : min= 204, max= 362, avg=271.80, stdev=46.95, samples=20 00:25:38.877 lat (msec) : 20=1.06%, 50=32.34%, 100=65.49%, 250=1.10% 00:25:38.877 cpu : usr=30.51%, sys=2.61%, ctx=953, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83680: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=267, BW=1071KiB/s (1097kB/s)(10.5MiB/10017msec) 00:25:38.877 slat (usec): min=3, max=8044, avg=20.69, stdev=199.93 00:25:38.877 clat (msec): min=9, max=127, avg=59.62, stdev=19.54 00:25:38.877 lat (msec): min=9, max=127, avg=59.64, stdev=19.54 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 20], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 42], 00:25:38.877 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 62], 00:25:38.877 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 89], 95.00th=[ 95], 00:25:38.877 | 99.00th=[ 105], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 128], 00:25:38.877 | 99.99th=[ 128] 00:25:38.877 bw ( KiB/s): min= 657, max= 1384, per=4.11%, avg=1067.05, stdev=216.58, samples=20 00:25:38.877 iops : min= 164, max= 346, avg=266.60, stdev=54.15, samples=20 00:25:38.877 lat (msec) : 10=0.07%, 
20=1.27%, 50=31.76%, 100=65.11%, 250=1.79% 00:25:38.877 cpu : usr=41.05%, sys=3.20%, ctx=1200, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=77.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83681: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=267, BW=1072KiB/s (1097kB/s)(10.5MiB/10014msec) 00:25:38.877 slat (usec): min=4, max=10033, avg=22.24, stdev=227.45 00:25:38.877 clat (msec): min=15, max=126, avg=59.59, stdev=19.11 00:25:38.877 lat (msec): min=15, max=126, avg=59.61, stdev=19.11 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 29], 5.00th=[ 35], 10.00th=[ 37], 20.00th=[ 42], 00:25:38.877 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:25:38.877 | 70.00th=[ 67], 80.00th=[ 78], 90.00th=[ 87], 95.00th=[ 93], 00:25:38.877 | 99.00th=[ 106], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 128], 00:25:38.877 | 99.99th=[ 128] 00:25:38.877 bw ( KiB/s): min= 656, max= 1341, per=4.12%, avg=1068.65, stdev=217.80, samples=20 00:25:38.877 iops : min= 164, max= 335, avg=267.15, stdev=54.43, samples=20 00:25:38.877 lat (msec) : 20=0.48%, 50=32.35%, 100=64.11%, 250=3.06% 00:25:38.877 cpu : usr=40.63%, sys=3.33%, ctx=1231, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=1.6%, 4=5.9%, 8=77.3%, 16=15.0%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83682: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10036msec) 00:25:38.877 slat (usec): min=2, max=8032, avg=24.14, stdev=248.21 00:25:38.877 clat (msec): min=8, max=123, avg=58.02, stdev=19.01 00:25:38.877 lat (msec): min=8, max=123, avg=58.04, stdev=19.02 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 42], 00:25:38.877 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 62], 00:25:38.877 | 70.00th=[ 66], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 91], 00:25:38.877 | 99.00th=[ 103], 99.50th=[ 106], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.877 | 99.99th=[ 125] 00:25:38.877 bw ( KiB/s): min= 792, max= 1904, per=4.23%, avg=1097.85, stdev=244.67, samples=20 00:25:38.877 iops : min= 198, max= 476, avg=274.45, stdev=61.17, samples=20 00:25:38.877 lat (msec) : 10=0.07%, 20=3.40%, 50=28.72%, 100=66.35%, 250=1.45% 00:25:38.877 cpu : usr=40.08%, sys=3.25%, ctx=1650, majf=0, minf=9 00:25:38.877 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83683: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=264, BW=1058KiB/s 
(1083kB/s)(10.4MiB/10033msec) 00:25:38.877 slat (usec): min=4, max=8018, avg=16.99, stdev=155.48 00:25:38.877 clat (msec): min=8, max=125, avg=60.37, stdev=19.35 00:25:38.877 lat (msec): min=8, max=125, avg=60.39, stdev=19.35 00:25:38.877 clat percentiles (msec): 00:25:38.877 | 1.00th=[ 10], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 46], 00:25:38.877 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:25:38.877 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 87], 95.00th=[ 94], 00:25:38.877 | 99.00th=[ 102], 99.50th=[ 108], 99.90th=[ 126], 99.95th=[ 127], 00:25:38.877 | 99.99th=[ 127] 00:25:38.877 bw ( KiB/s): min= 760, max= 1832, per=4.08%, avg=1057.45, stdev=239.94, samples=20 00:25:38.877 iops : min= 190, max= 458, avg=264.35, stdev=59.99, samples=20 00:25:38.877 lat (msec) : 10=1.21%, 20=1.81%, 50=25.86%, 100=69.92%, 250=1.21% 00:25:38.877 cpu : usr=30.73%, sys=2.50%, ctx=928, majf=0, minf=9 00:25:38.877 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=80.6%, 16=16.8%, 32=0.0%, >=64=0.0% 00:25:38.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.877 issued rwts: total=2653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.877 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.877 filename1: (groupid=0, jobs=1): err= 0: pid=83684: Thu Dec 5 11:09:03 2024 00:25:38.877 read: IOPS=276, BW=1107KiB/s (1134kB/s)(10.8MiB/10013msec) 00:25:38.878 slat (usec): min=4, max=4044, avg=18.63, stdev=132.36 00:25:38.878 clat (msec): min=13, max=129, avg=57.72, stdev=18.30 00:25:38.878 lat (msec): min=13, max=129, avg=57.73, stdev=18.30 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 42], 00:25:38.878 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 57], 60.00th=[ 61], 00:25:38.878 | 70.00th=[ 65], 80.00th=[ 73], 90.00th=[ 85], 95.00th=[ 92], 00:25:38.878 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 130], 99.95th=[ 130], 00:25:38.878 | 99.99th=[ 130] 00:25:38.878 bw ( KiB/s): min= 784, max= 1707, per=4.26%, avg=1104.05, stdev=226.42, samples=20 00:25:38.878 iops : min= 196, max= 426, avg=275.95, stdev=56.49, samples=20 00:25:38.878 lat (msec) : 20=1.26%, 50=33.27%, 100=64.16%, 250=1.30% 00:25:38.878 cpu : usr=41.19%, sys=3.30%, ctx=1381, majf=0, minf=9 00:25:38.878 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83685: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=269, BW=1078KiB/s (1104kB/s)(10.5MiB/10006msec) 00:25:38.878 slat (usec): min=2, max=8049, avg=39.78, stdev=442.64 00:25:38.878 clat (msec): min=7, max=127, avg=59.17, stdev=19.82 00:25:38.878 lat (msec): min=7, max=127, avg=59.21, stdev=19.81 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 00:25:38.878 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 61], 00:25:38.878 | 70.00th=[ 68], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 95], 00:25:38.878 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 127], 99.95th=[ 128], 00:25:38.878 | 99.99th=[ 128] 00:25:38.878 bw ( KiB/s): min= 704, max= 1280, per=4.05%, avg=1050.79, 
stdev=194.22, samples=19 00:25:38.878 iops : min= 176, max= 320, avg=262.68, stdev=48.55, samples=19 00:25:38.878 lat (msec) : 10=0.59%, 20=1.67%, 50=32.55%, 100=62.48%, 250=2.71% 00:25:38.878 cpu : usr=30.50%, sys=2.93%, ctx=927, majf=0, minf=9 00:25:38.878 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83686: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=262, BW=1050KiB/s (1076kB/s)(10.3MiB/10034msec) 00:25:38.878 slat (nsec): min=5493, max=82469, avg=13168.93, stdev=5305.82 00:25:38.878 clat (msec): min=8, max=125, avg=60.81, stdev=19.51 00:25:38.878 lat (msec): min=8, max=125, avg=60.82, stdev=19.51 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 10], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 47], 00:25:38.878 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 63], 00:25:38.878 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 88], 95.00th=[ 93], 00:25:38.878 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.878 | 99.99th=[ 126] 00:25:38.878 bw ( KiB/s): min= 744, max= 1936, per=4.04%, avg=1047.45, stdev=254.90, samples=20 00:25:38.878 iops : min= 186, max= 484, avg=261.85, stdev=63.73, samples=20 00:25:38.878 lat (msec) : 10=1.37%, 20=2.28%, 50=23.30%, 100=71.20%, 250=1.86% 00:25:38.878 cpu : usr=30.91%, sys=2.18%, ctx=951, majf=0, minf=9 00:25:38.878 IO depths : 1=0.2%, 2=1.0%, 4=3.7%, 8=78.6%, 16=16.5%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=88.9%, 8=10.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83687: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=261, BW=1047KiB/s (1072kB/s)(10.2MiB/10019msec) 00:25:38.878 slat (usec): min=4, max=8030, avg=16.85, stdev=156.62 00:25:38.878 clat (msec): min=18, max=124, avg=61.00, stdev=17.91 00:25:38.878 lat (msec): min=18, max=124, avg=61.02, stdev=17.92 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 47], 00:25:38.878 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:25:38.878 | 70.00th=[ 69], 80.00th=[ 77], 90.00th=[ 86], 95.00th=[ 93], 00:25:38.878 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.878 | 99.99th=[ 125] 00:25:38.878 bw ( KiB/s): min= 760, max= 1507, per=4.03%, avg=1044.40, stdev=183.85, samples=20 00:25:38.878 iops : min= 190, max= 376, avg=261.05, stdev=45.87, samples=20 00:25:38.878 lat (msec) : 20=0.61%, 50=28.56%, 100=69.20%, 250=1.64% 00:25:38.878 cpu : usr=30.90%, sys=2.27%, ctx=914, majf=0, minf=9 00:25:38.878 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.2%, 16=16.9%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 
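Every per-file report in this group follows the same fio layout: a read summary line (IOPS/BW), slat/clat/lat percentile tables, per-sample bandwidth and IOPS statistics, CPU usage, and the IO depth/submit/complete histograms. To compare the 24 randread jobs at a glance, the summary lines can be scraped from a capture of this console output; a minimal sketch in bash, where fio-rand-params.log is a hypothetical file holding the text of this run:

# List each job's IOPS/bandwidth pair, highest IOPS first.
# fio-rand-params.log is a hypothetical capture of the console output.
grep -Eo 'IOPS=[0-9]+, BW=[0-9]+KiB/s' fio-rand-params.log | sort -t= -k2 -rn

Against the full set of reports this prints values between roughly IOPS=261 and IOPS=283, matching the 1047KiB/s-1136KiB/s spread in the run-status summary further down.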
00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83688: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=268, BW=1073KiB/s (1099kB/s)(10.5MiB/10046msec) 00:25:38.878 slat (usec): min=6, max=8021, avg=19.00, stdev=186.34 00:25:38.878 clat (msec): min=2, max=127, avg=59.48, stdev=23.59 00:25:38.878 lat (msec): min=2, max=127, avg=59.50, stdev=23.58 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 43], 00:25:38.878 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 65], 00:25:38.878 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 95], 00:25:38.878 | 99.00th=[ 115], 99.50th=[ 123], 99.90th=[ 126], 99.95th=[ 128], 00:25:38.878 | 99.99th=[ 128] 00:25:38.878 bw ( KiB/s): min= 712, max= 2810, per=4.13%, avg=1071.15, stdev=444.21, samples=20 00:25:38.878 iops : min= 178, max= 702, avg=267.75, stdev=110.95, samples=20 00:25:38.878 lat (msec) : 4=2.37%, 10=2.30%, 20=3.56%, 50=21.34%, 100=67.05% 00:25:38.878 lat (msec) : 250=3.38% 00:25:38.878 cpu : usr=37.69%, sys=3.19%, ctx=1226, majf=0, minf=0 00:25:38.878 IO depths : 1=0.3%, 2=1.6%, 4=5.4%, 8=76.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=89.4%, 8=9.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83689: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=280, BW=1121KiB/s (1148kB/s)(11.0MiB/10013msec) 00:25:38.878 slat (usec): min=2, max=7023, avg=26.88, stdev=269.84 00:25:38.878 clat (msec): min=8, max=124, avg=56.95, stdev=17.89 00:25:38.878 lat (msec): min=8, max=124, avg=56.98, stdev=17.89 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 42], 00:25:38.878 | 30.00th=[ 45], 40.00th=[ 52], 50.00th=[ 57], 60.00th=[ 60], 00:25:38.878 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 83], 95.00th=[ 91], 00:25:38.878 | 99.00th=[ 101], 99.50th=[ 105], 99.90th=[ 125], 99.95th=[ 125], 00:25:38.878 | 99.99th=[ 125] 00:25:38.878 bw ( KiB/s): min= 841, max= 1442, per=4.30%, avg=1116.95, stdev=182.08, samples=20 00:25:38.878 iops : min= 210, max= 360, avg=279.10, stdev=45.48, samples=20 00:25:38.878 lat (msec) : 10=0.11%, 20=0.53%, 50=36.80%, 100=61.35%, 250=1.21% 00:25:38.878 cpu : usr=39.54%, sys=3.18%, ctx=1389, majf=0, minf=9 00:25:38.878 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:25:38.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.878 issued rwts: total=2807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.878 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.878 filename2: (groupid=0, jobs=1): err= 0: pid=83690: Thu Dec 5 11:09:03 2024 00:25:38.878 read: IOPS=275, BW=1104KiB/s (1130kB/s)(10.8MiB/10019msec) 00:25:38.878 slat (usec): min=6, max=10034, avg=24.33, stdev=287.55 00:25:38.878 clat (msec): min=14, max=127, avg=57.89, stdev=18.53 00:25:38.878 lat (msec): min=14, max=127, avg=57.92, stdev=18.53 00:25:38.878 clat percentiles (msec): 00:25:38.878 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 43], 00:25:38.878 | 30.00th=[ 47], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.878 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 
00:25:38.878 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 128], 99.95th=[ 128], 00:25:38.878 | 99.99th=[ 128] 00:25:38.878 bw ( KiB/s): min= 816, max= 1744, per=4.24%, avg=1100.30, stdev=215.84, samples=20 00:25:38.879 iops : min= 204, max= 436, avg=275.05, stdev=53.95, samples=20 00:25:38.879 lat (msec) : 20=0.98%, 50=35.89%, 100=61.87%, 250=1.27% 00:25:38.879 cpu : usr=30.76%, sys=2.47%, ctx=898, majf=0, minf=9 00:25:38.879 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:25:38.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.879 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.879 issued rwts: total=2764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.879 filename2: (groupid=0, jobs=1): err= 0: pid=83691: Thu Dec 5 11:09:03 2024 00:25:38.879 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.3MiB/10006msec) 00:25:38.879 slat (usec): min=2, max=8020, avg=30.82, stdev=334.44 00:25:38.879 clat (msec): min=6, max=138, avg=60.44, stdev=21.68 00:25:38.879 lat (msec): min=6, max=138, avg=60.47, stdev=21.68 00:25:38.879 clat percentiles (msec): 00:25:38.879 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:25:38.879 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 62], 00:25:38.879 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 93], 95.00th=[ 103], 00:25:38.879 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 134], 99.95th=[ 138], 00:25:38.879 | 99.99th=[ 138] 00:25:38.879 bw ( KiB/s): min= 624, max= 1392, per=3.96%, avg=1028.05, stdev=246.41, samples=19 00:25:38.879 iops : min= 156, max= 348, avg=257.00, stdev=61.59, samples=19 00:25:38.879 lat (msec) : 10=0.23%, 20=1.02%, 50=33.11%, 100=60.05%, 250=5.60% 00:25:38.879 cpu : usr=39.20%, sys=3.07%, ctx=1584, majf=0, minf=9 00:25:38.879 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:25:38.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.879 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.879 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.879 filename2: (groupid=0, jobs=1): err= 0: pid=83692: Thu Dec 5 11:09:03 2024 00:25:38.879 read: IOPS=270, BW=1081KiB/s (1107kB/s)(10.6MiB/10012msec) 00:25:38.879 slat (usec): min=2, max=8044, avg=34.52, stdev=351.24 00:25:38.879 clat (msec): min=8, max=129, avg=59.03, stdev=19.60 00:25:38.879 lat (msec): min=8, max=129, avg=59.06, stdev=19.60 00:25:38.879 clat percentiles (msec): 00:25:38.879 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 42], 00:25:38.879 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 61], 00:25:38.879 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 87], 95.00th=[ 94], 00:25:38.879 | 99.00th=[ 109], 99.50th=[ 118], 99.90th=[ 124], 99.95th=[ 130], 00:25:38.879 | 99.99th=[ 130] 00:25:38.879 bw ( KiB/s): min= 704, max= 1448, per=4.15%, avg=1077.65, stdev=211.48, samples=20 00:25:38.879 iops : min= 176, max= 362, avg=269.40, stdev=52.86, samples=20 00:25:38.879 lat (msec) : 10=0.11%, 20=1.88%, 50=33.74%, 100=62.05%, 250=2.22% 00:25:38.879 cpu : usr=33.88%, sys=2.55%, ctx=1127, majf=0, minf=9 00:25:38.879 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:25:38.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.879 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:38.879 issued rwts: total=2706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.879 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:38.879 00:25:38.879 Run status group 0 (all jobs): 00:25:38.879 READ: bw=25.3MiB/s (26.6MB/s), 1047KiB/s-1136KiB/s (1072kB/s-1163kB/s), io=255MiB (267MB), run=10003-10052msec 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 bdev_null0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.879 [2024-12-05 11:09:04.306746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:25:38.879 11:09:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:38.879 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.880 bdev_null1 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:38.880 { 00:25:38.880 "params": { 00:25:38.880 "name": "Nvme$subsystem", 00:25:38.880 "trtype": "$TEST_TRANSPORT", 00:25:38.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.880 "adrfam": "ipv4", 00:25:38.880 "trsvcid": "$NVMF_PORT", 00:25:38.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.880 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:38.880 "hdgst": ${hdgst:-false}, 00:25:38.880 "ddgst": ${ddgst:-false} 00:25:38.880 }, 00:25:38.880 "method": "bdev_nvme_attach_controller" 00:25:38.880 } 00:25:38.880 EOF 00:25:38.880 )") 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:38.880 { 00:25:38.880 "params": { 00:25:38.880 "name": "Nvme$subsystem", 00:25:38.880 "trtype": "$TEST_TRANSPORT", 00:25:38.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:38.880 "adrfam": "ipv4", 00:25:38.880 "trsvcid": "$NVMF_PORT", 00:25:38.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:38.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:38.880 "hdgst": ${hdgst:-false}, 00:25:38.880 "ddgst": ${ddgst:-false} 00:25:38.880 }, 00:25:38.880 "method": "bdev_nvme_attach_controller" 00:25:38.880 } 00:25:38.880 EOF 00:25:38.880 )") 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:38.880 "params": { 00:25:38.880 "name": "Nvme0", 00:25:38.880 "trtype": "tcp", 00:25:38.880 "traddr": "10.0.0.2", 00:25:38.880 "adrfam": "ipv4", 00:25:38.880 "trsvcid": "4420", 00:25:38.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:38.880 "hdgst": false, 00:25:38.880 "ddgst": false 00:25:38.880 }, 00:25:38.880 "method": "bdev_nvme_attach_controller" 00:25:38.880 },{ 00:25:38.880 "params": { 00:25:38.880 "name": "Nvme1", 00:25:38.880 "trtype": "tcp", 00:25:38.880 "traddr": "10.0.0.2", 00:25:38.880 "adrfam": "ipv4", 00:25:38.880 "trsvcid": "4420", 00:25:38.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:38.880 "hdgst": false, 00:25:38.880 "ddgst": false 00:25:38.880 }, 00:25:38.880 "method": "bdev_nvme_attach_controller" 00:25:38.880 }' 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:38.880 11:09:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:38.880 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:38.880 ... 00:25:38.880 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:38.880 ... 
00:25:38.880 fio-3.35 00:25:38.880 Starting 4 threads 00:25:43.070 00:25:43.070 filename0: (groupid=0, jobs=1): err= 0: pid=83835: Thu Dec 5 11:09:10 2024 00:25:43.070 read: IOPS=2303, BW=18.0MiB/s (18.9MB/s)(90.0MiB/5001msec) 00:25:43.070 slat (nsec): min=5899, max=84596, avg=12938.51, stdev=2976.67 00:25:43.070 clat (usec): min=986, max=5286, avg=3423.66, stdev=359.94 00:25:43.070 lat (usec): min=999, max=5308, avg=3436.60, stdev=359.52 00:25:43.070 clat percentiles (usec): 00:25:43.070 | 1.00th=[ 1598], 5.00th=[ 2900], 10.00th=[ 3163], 20.00th=[ 3425], 00:25:43.070 | 30.00th=[ 3458], 40.00th=[ 3490], 50.00th=[ 3490], 60.00th=[ 3523], 00:25:43.070 | 70.00th=[ 3523], 80.00th=[ 3556], 90.00th=[ 3654], 95.00th=[ 3752], 00:25:43.070 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4621], 99.95th=[ 4686], 00:25:43.070 | 99.99th=[ 5145] 00:25:43.070 bw ( KiB/s): min=17920, max=20976, per=21.62%, avg=18478.22, stdev=989.66, samples=9 00:25:43.070 iops : min= 2240, max= 2622, avg=2309.78, stdev=123.71, samples=9 00:25:43.070 lat (usec) : 1000=0.01% 00:25:43.070 lat (msec) : 2=1.62%, 4=97.34%, 10=1.03% 00:25:43.070 cpu : usr=90.08%, sys=9.32%, ctx=5, majf=0, minf=0 00:25:43.070 IO depths : 1=0.1%, 2=23.1%, 4=51.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 issued rwts: total=11522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:43.070 filename0: (groupid=0, jobs=1): err= 0: pid=83836: Thu Dec 5 11:09:10 2024 00:25:43.070 read: IOPS=3032, BW=23.7MiB/s (24.8MB/s)(118MiB/5001msec) 00:25:43.070 slat (usec): min=5, max=269, avg=10.46, stdev= 4.67 00:25:43.070 clat (usec): min=202, max=5888, avg=2612.56, stdev=799.18 00:25:43.070 lat (usec): min=210, max=5925, avg=2623.01, stdev=798.32 00:25:43.070 clat percentiles (usec): 00:25:43.070 | 1.00th=[ 1057], 5.00th=[ 1614], 10.00th=[ 1680], 20.00th=[ 1713], 00:25:43.070 | 30.00th=[ 1942], 40.00th=[ 2114], 50.00th=[ 2474], 60.00th=[ 3163], 00:25:43.070 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3490], 95.00th=[ 3589], 00:25:43.070 | 99.00th=[ 3818], 99.50th=[ 3916], 99.90th=[ 4686], 99.95th=[ 5735], 00:25:43.070 | 99.99th=[ 5866] 00:25:43.070 bw ( KiB/s): min=22208, max=24944, per=28.30%, avg=24192.00, stdev=915.29, samples=9 00:25:43.070 iops : min= 2776, max= 3118, avg=3024.00, stdev=114.41, samples=9 00:25:43.070 lat (usec) : 250=0.01%, 750=0.01%, 1000=0.26% 00:25:43.070 lat (msec) : 2=36.58%, 4=62.70%, 10=0.44% 00:25:43.070 cpu : usr=89.64%, sys=9.34%, ctx=53, majf=0, minf=0 00:25:43.070 IO depths : 1=0.1%, 2=1.6%, 4=62.8%, 8=35.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 issued rwts: total=15164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:43.070 filename1: (groupid=0, jobs=1): err= 0: pid=83837: Thu Dec 5 11:09:10 2024 00:25:43.070 read: IOPS=2909, BW=22.7MiB/s (23.8MB/s)(114MiB/5002msec) 00:25:43.070 slat (nsec): min=5877, max=73291, avg=10553.81, stdev=3275.95 00:25:43.070 clat (usec): min=921, max=5467, avg=2722.00, stdev=775.22 00:25:43.070 lat (usec): min=933, max=5480, avg=2732.56, stdev=775.52 00:25:43.070 clat percentiles (usec): 00:25:43.070 | 1.00th=[ 1516], 5.00th=[ 
1680], 10.00th=[ 1696], 20.00th=[ 1778], 00:25:43.070 | 30.00th=[ 1942], 40.00th=[ 2311], 50.00th=[ 3130], 60.00th=[ 3359], 00:25:43.070 | 70.00th=[ 3392], 80.00th=[ 3458], 90.00th=[ 3523], 95.00th=[ 3589], 00:25:43.070 | 99.00th=[ 3752], 99.50th=[ 3785], 99.90th=[ 4015], 99.95th=[ 4047], 00:25:43.070 | 99.99th=[ 4359] 00:25:43.070 bw ( KiB/s): min=18048, max=24944, per=27.23%, avg=23273.60, stdev=2266.04, samples=10 00:25:43.070 iops : min= 2256, max= 3118, avg=2909.20, stdev=283.25, samples=10 00:25:43.070 lat (usec) : 1000=0.23% 00:25:43.070 lat (msec) : 2=33.28%, 4=66.38%, 10=0.11% 00:25:43.070 cpu : usr=90.22%, sys=9.08%, ctx=6, majf=0, minf=0 00:25:43.070 IO depths : 1=0.1%, 2=4.8%, 4=61.1%, 8=34.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 issued rwts: total=14551,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:43.070 filename1: (groupid=0, jobs=1): err= 0: pid=83838: Thu Dec 5 11:09:10 2024 00:25:43.070 read: IOPS=2441, BW=19.1MiB/s (20.0MB/s)(95.4MiB/5002msec) 00:25:43.070 slat (nsec): min=5901, max=35663, avg=13172.35, stdev=2838.39 00:25:43.070 clat (usec): min=1017, max=4487, avg=3229.94, stdev=578.33 00:25:43.070 lat (usec): min=1030, max=4499, avg=3243.12, stdev=578.21 00:25:43.070 clat percentiles (usec): 00:25:43.070 | 1.00th=[ 1500], 5.00th=[ 1713], 10.00th=[ 2114], 20.00th=[ 3032], 00:25:43.070 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3458], 60.00th=[ 3490], 00:25:43.070 | 70.00th=[ 3523], 80.00th=[ 3523], 90.00th=[ 3589], 95.00th=[ 3654], 00:25:43.070 | 99.00th=[ 3818], 99.50th=[ 4015], 99.90th=[ 4178], 99.95th=[ 4228], 00:25:43.070 | 99.99th=[ 4228] 00:25:43.070 bw ( KiB/s): min=17920, max=24704, per=23.04%, avg=19698.11, stdev=2491.64, samples=9 00:25:43.070 iops : min= 2240, max= 3088, avg=2462.22, stdev=311.41, samples=9 00:25:43.070 lat (msec) : 2=7.73%, 4=91.72%, 10=0.55% 00:25:43.070 cpu : usr=89.92%, sys=9.36%, ctx=6, majf=0, minf=1 00:25:43.070 IO depths : 1=0.1%, 2=18.1%, 4=53.8%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.070 issued rwts: total=12211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:43.070 00:25:43.070 Run status group 0 (all jobs): 00:25:43.070 READ: bw=83.5MiB/s (87.5MB/s), 18.0MiB/s-23.7MiB/s (18.9MB/s-24.8MB/s), io=418MiB (438MB), run=5001-5002msec 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:43.331 11:09:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:43.331 ************************************ 00:25:43.331 END TEST fio_dif_rand_params 00:25:43.331 ************************************ 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.331 00:25:43.331 real 0m23.690s 00:25:43.331 user 2m1.424s 00:25:43.331 sys 0m11.385s 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:43.331 11:09:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 11:09:10 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:43.591 11:09:10 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:43.591 11:09:10 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:43.591 11:09:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 ************************************ 00:25:43.591 START TEST fio_dif_digest 00:25:43.591 ************************************ 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:25:43.591 11:09:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 bdev_null0 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.591 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:43.591 [2024-12-05 11:09:10.566648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:43.592 { 00:25:43.592 "params": { 00:25:43.592 "name": "Nvme$subsystem", 00:25:43.592 "trtype": "$TEST_TRANSPORT", 00:25:43.592 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:25:43.592 "adrfam": "ipv4", 00:25:43.592 "trsvcid": "$NVMF_PORT", 00:25:43.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.592 "hdgst": ${hdgst:-false}, 00:25:43.592 "ddgst": ${ddgst:-false} 00:25:43.592 }, 00:25:43.592 "method": "bdev_nvme_attach_controller" 00:25:43.592 } 00:25:43.592 EOF 00:25:43.592 )") 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:43.592 "params": { 00:25:43.592 "name": "Nvme0", 00:25:43.592 "trtype": "tcp", 00:25:43.592 "traddr": "10.0.0.2", 00:25:43.592 "adrfam": "ipv4", 00:25:43.592 "trsvcid": "4420", 00:25:43.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:43.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:43.592 "hdgst": true, 00:25:43.592 "ddgst": true 00:25:43.592 }, 00:25:43.592 "method": "bdev_nvme_attach_controller" 00:25:43.592 }' 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:43.592 11:09:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.851 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:43.851 ... 
00:25:43.851 fio-3.35 00:25:43.851 Starting 3 threads 00:25:56.068 00:25:56.068 filename0: (groupid=0, jobs=1): err= 0: pid=83945: Thu Dec 5 11:09:21 2024 00:25:56.068 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(350MiB/10008msec) 00:25:56.068 slat (nsec): min=6123, max=35628, avg=9712.79, stdev=4058.95 00:25:56.068 clat (usec): min=3838, max=13958, avg=10701.77, stdev=488.71 00:25:56.068 lat (usec): min=3857, max=13973, avg=10711.48, stdev=488.74 00:25:56.068 clat percentiles (usec): 00:25:56.068 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:56.068 | 30.00th=[10421], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:25:56.068 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11469], 00:25:56.068 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13960], 99.95th=[13960], 00:25:56.068 | 99.99th=[13960] 00:25:56.068 bw ( KiB/s): min=33792, max=37632, per=33.38%, avg=35813.05, stdev=998.42, samples=19 00:25:56.068 iops : min= 264, max= 294, avg=279.79, stdev= 7.80, samples=19 00:25:56.068 lat (msec) : 4=0.11%, 10=0.21%, 20=99.68% 00:25:56.068 cpu : usr=89.01%, sys=10.55%, ctx=14, majf=0, minf=9 00:25:56.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 issued rwts: total=2799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:56.068 filename0: (groupid=0, jobs=1): err= 0: pid=83946: Thu Dec 5 11:09:21 2024 00:25:56.068 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(350MiB/10003msec) 00:25:56.068 slat (nsec): min=6205, max=31096, avg=9182.61, stdev=3538.13 00:25:56.068 clat (usec): min=3783, max=13598, avg=10709.74, stdev=444.44 00:25:56.068 lat (usec): min=3790, max=13613, avg=10718.92, stdev=444.84 00:25:56.068 clat percentiles (usec): 00:25:56.068 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:56.068 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:25:56.068 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:25:56.068 | 99.00th=[11863], 99.50th=[11994], 99.90th=[13566], 99.95th=[13566], 00:25:56.068 | 99.99th=[13566] 00:25:56.068 bw ( KiB/s): min=33792, max=36864, per=33.34%, avg=35772.63, stdev=934.16, samples=19 00:25:56.068 iops : min= 264, max= 288, avg=279.47, stdev= 7.30, samples=19 00:25:56.068 lat (msec) : 4=0.11%, 10=0.11%, 20=99.79% 00:25:56.068 cpu : usr=88.98%, sys=10.59%, ctx=103, majf=0, minf=0 00:25:56.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 issued rwts: total=2796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:56.068 filename0: (groupid=0, jobs=1): err= 0: pid=83947: Thu Dec 5 11:09:21 2024 00:25:56.068 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(349MiB/10001msec) 00:25:56.068 slat (nsec): min=6209, max=42617, avg=9085.44, stdev=3330.77 00:25:56.068 clat (usec): min=8367, max=13881, avg=10719.31, stdev=396.94 00:25:56.068 lat (usec): min=8374, max=13895, avg=10728.39, stdev=397.36 00:25:56.068 clat percentiles (usec): 00:25:56.068 | 1.00th=[10421], 5.00th=[10421], 10.00th=[10421], 20.00th=[10421], 00:25:56.068 | 30.00th=[10421], 
40.00th=[10552], 50.00th=[10552], 60.00th=[10683], 00:25:56.068 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11469], 00:25:56.068 | 99.00th=[11731], 99.50th=[11863], 99.90th=[13829], 99.95th=[13829], 00:25:56.068 | 99.99th=[13829] 00:25:56.068 bw ( KiB/s): min=33792, max=36864, per=33.34%, avg=35772.63, stdev=934.16, samples=19 00:25:56.068 iops : min= 264, max= 288, avg=279.47, stdev= 7.30, samples=19 00:25:56.068 lat (msec) : 10=0.11%, 20=99.89% 00:25:56.068 cpu : usr=88.68%, sys=10.90%, ctx=27, majf=0, minf=0 00:25:56.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:56.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.068 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:56.068 00:25:56.068 Run status group 0 (all jobs): 00:25:56.068 READ: bw=105MiB/s (110MB/s), 34.9MiB/s-35.0MiB/s (36.6MB/s-36.7MB/s), io=1049MiB (1099MB), run=10001-10008msec 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.068 00:25:56.068 real 0m11.029s 00:25:56.068 user 0m27.329s 00:25:56.068 sys 0m3.538s 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.068 11:09:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 ************************************ 00:25:56.069 END TEST fio_dif_digest 00:25:56.069 ************************************ 00:25:56.069 11:09:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:25:56.069 11:09:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:56.069 rmmod nvme_tcp 00:25:56.069 rmmod nvme_fabrics 00:25:56.069 rmmod nvme_keyring 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
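Module teardown here is deliberately tolerant: NVMe/TCP connections can keep nvme-tcp pinned briefly, so nvmfcleanup drops errexit and retries the unload up to 20 times. Roughly as follows; only one successful iteration appears in this trace, so whether the loop sleeps or breaks between attempts is an assumption:

# Sketch of the traced unload loop; the rmmod lines above
# (nvme_tcp, nvme_fabrics, nvme_keyring) are its verbose output.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # assumption: stop once the unload sticks
done
modprobe -v -r nvme-fabrics
set -e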
00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 83180 ']' 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 83180 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83180 ']' 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83180 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83180 00:25:56.069 killing process with pid 83180 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83180' 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83180 00:25:56.069 11:09:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83180 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:25:56.069 11:09:21 nvmf_dif -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:56.069 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:56.069 Waiting for block devices as requested 00:25:56.069 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.069 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.069 11:09:22 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:56.069 11:09:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:25:56.069 11:09:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.069 11:09:22 nvmf_dif -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:25:56.069 11:09:22 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:25:56.069 11:09:22 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:25:56.069 11:09:22 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:56.069 11:09:22 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:25:56.069 ************************************ 00:25:56.069 END TEST nvmf_dif 00:25:56.069 ************************************ 00:25:56.069 00:25:56.069 real 1m1.086s 00:25:56.069 user 3m45.014s 00:25:56.069 sys 0m25.191s 00:25:56.069 11:09:22 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.069 11:09:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 11:09:22 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:56.069 11:09:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:56.069 11:09:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.069 11:09:22 -- common/autotest_common.sh@10 -- # set +x 00:25:56.069 ************************************ 00:25:56.069 START TEST nvmf_abort_qd_sizes 00:25:56.069 ************************************ 00:25:56.069 11:09:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:25:56.069 * Looking for test storage... 
00:25:56.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.069 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:56.329 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.329 --rc genhtml_branch_coverage=1 00:25:56.329 --rc genhtml_function_coverage=1 00:25:56.329 --rc genhtml_legend=1 00:25:56.329 --rc geninfo_all_blocks=1 00:25:56.329 --rc geninfo_unexecuted_blocks=1 00:25:56.329 00:25:56.329 ' 00:25:56.329 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:56.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.330 --rc genhtml_branch_coverage=1 00:25:56.330 --rc genhtml_function_coverage=1 00:25:56.330 --rc genhtml_legend=1 00:25:56.330 --rc geninfo_all_blocks=1 00:25:56.330 --rc geninfo_unexecuted_blocks=1 00:25:56.330 00:25:56.330 ' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:56.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.330 --rc genhtml_branch_coverage=1 00:25:56.330 --rc genhtml_function_coverage=1 00:25:56.330 --rc genhtml_legend=1 00:25:56.330 --rc geninfo_all_blocks=1 00:25:56.330 --rc geninfo_unexecuted_blocks=1 00:25:56.330 00:25:56.330 ' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:56.330 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.330 
11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@223 -- # create_target_ns 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:56.330 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
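Everything from create_target_ns through setup_interface_pair builds the NET_TYPE=virt test network: a target-side network namespace, a host bridge, and one initiator-side plus one target-side veth pair per interface id, with each firewall rule tagged SPDK_NVMF so teardown can strip exactly those rules via iptables-save | grep -v SPDK_NVMF | iptables-restore (as the nvmf_dif teardown above shows). Stripped of the helper indirection, pair 0 of the traced setup amounts to:

# Condensed from the trace; pair 1 repeats this with initiator1/target1
# and 10.0.0.3/10.0.0.4 (ifalias bookkeeping omitted).
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
ip link add initiator0 type veth peer name initiator0_br
ip link add target0 type veth peer name target0_br
ip link set target0 netns nvmf_ns_spdk                 # target half lives in the ns
ip addr add 10.0.0.1/24 dev initiator0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
ip link set initiator0_br master nvmf_br               # bridge the two halves
ip link set target0_br master nvmf_br
for dev in initiator0 initiator0_br target0_br; do ip link set "$dev" up; done
ip netns exec nvmf_ns_spdk ip link set target0 up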
00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set 
target0 up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:56.331 10.0.0.1 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias 00:25:56.331 10.0.0.2 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:56.331 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set 
target1_br up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772163 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:56.593 10.0.0.3 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772164 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:56.593 10.0.0.4 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # 
set_up target1 NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
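At this point the trace has built two veth interface pairs (initiator0/target0 and initiator1/target1), moved each targetN into the nvmf_ns_spdk namespace, enslaved the *_br peers to the nvmf_br bridge, and opened TCP port 4420 in iptables; the ping loop that follows verifies the result. Addresses come from an integer pool: 167772161 is 0x0A000001, which val_to_ip renders as 10.0.0.1. A minimal standalone sketch of that conversion follows; the octet extraction via bit-shifts is an assumption, since the trace only shows the final printf call.
#!/usr/bin/env bash
# Sketch of the val_to_ip conversion seen in the trace: 167772161 -> 10.0.0.1.
# The bit-shift decomposition is inferred; the trace shows only
# printf '%u.%u.%u.%u\n' 10 0 0 1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff ))  $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1 (initiator0, host side)
val_to_ip 167772162   # 10.0.0.2 (target0, inside nvmf_ns_spdk)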
00:25:56.593 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:56.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:56.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:56.594 00:25:56.594 --- 10.0.0.1 ping statistics --- 00:25:56.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.594 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:56.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:56.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:25:56.594 00:25:56.594 --- 10.0.0.2 ping statistics --- 00:25:56.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.594 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:56.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:56.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:25:56.594 00:25:56.594 --- 10.0.0.3 ping statistics --- 00:25:56.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.594 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:56.594 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:56.854 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:56.854 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:25:56.854 00:25:56.854 --- 10.0.0.4 ping statistics --- 00:25:56.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.854 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # return 0 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:25:56.854 11:09:23 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:57.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:57.682 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:57.682 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:25:57.682 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
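Every address lookup in this trace reads the interface's ifalias file rather than parsing ip-addr output: set_ip writes the dotted quad to /sys/class/net/<dev>/ifalias when it assigns the address, and get_ip_address reads it back, optionally through ip netns exec. The legacy variables assigned just below (NVMF_FIRST_INITIATOR_IP, NVMF_SECOND_INITIATOR_IP, NVMF_FIRST_TARGET_IP, NVMF_SECOND_TARGET_IP) are all derived this way. A minimal sketch of the lookup; get_ip is an illustrative name, the trace's actual helper is nvmf/setup.sh's get_ip_address.
# Read the address cached in ifalias, inside or outside the namespace.
get_ip() { # usage: get_ip <dev> [<netns>]
  local dev=$1 ns=${2:+ip netns exec $2}
  $ns cat "/sys/class/net/$dev/ifalias"
}
get_ip initiator0             # 10.0.0.1, host side
get_ip target0 nvmf_ns_spdk   # 10.0.0.2, inside the target namespace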
00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:25:57.941 
11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:57.941 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=84602 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 84602 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84602 ']' 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.942 11:09:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:57.942 [2024-12-05 11:09:25.016867] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:25:57.942 [2024-12-05 11:09:25.016941] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:58.200 [2024-12-05 11:09:25.168560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.200 [2024-12-05 11:09:25.218843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:58.201 [2024-12-05 11:09:25.218897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:58.201 [2024-12-05 11:09:25.218906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:58.201 [2024-12-05 11:09:25.218915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:58.201 [2024-12-05 11:09:25.218922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:58.201 [2024-12-05 11:09:25.219828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.201 [2024-12-05 11:09:25.219909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:58.201 [2024-12-05 11:09:25.221059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.201 [2024-12-05 11:09:25.221061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:58.201 [2024-12-05 11:09:25.262768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:58.766 11:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.766 11:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:25:58.766 11:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:58.766 11:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.766 11:09:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- 
scripts/common.sh@240 -- # hash lspci 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:25:59.025 11:09:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 
************************************ 00:25:59.025 START TEST spdk_target_abort 00:25:59.025 ************************************ 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 spdk_targetn1 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 [2024-12-05 11:09:26.100800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:59.025 [2024-12-05 11:09:26.151643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:59.025 11:09:26 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:59.025 11:09:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:02.311 Initializing NVMe Controllers 00:26:02.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:02.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:02.311 Initialization complete. Launching workers. 
00:26:02.311 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13440, failed: 0 00:26:02.311 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1090, failed to submit 12350 00:26:02.311 success 753, unsuccessful 337, failed 0 00:26:02.311 11:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:02.311 11:09:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:05.626 Initializing NVMe Controllers 00:26:05.626 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:05.626 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:05.626 Initialization complete. Launching workers. 00:26:05.626 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:26:05.626 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1165, failed to submit 7835 00:26:05.626 success 377, unsuccessful 788, failed 0 00:26:05.626 11:09:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:05.626 11:09:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:08.916 Initializing NVMe Controllers 00:26:08.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:08.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:08.916 Initialization complete. Launching workers. 
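Each abort run reports three counters: on the NS line, how many I/Os the workload completed; on the CTRLR line, how many abort commands were submitted versus skipped ("failed to submit", i.e. the targeted I/O had already completed), and of the submitted aborts how many succeeded. The numbers are internally consistent: for the qd=4 run above, 1090 + 12350 = 13440 and 753 + 337 = 1090. A small check of that accounting; the invariant is inferred from this log's numbers, not taken from the abort example's source.
# Sanity-check the abort counters printed above (inferred invariant:
# submitted + skipped == completed; success + unsuccessful == submitted).
check_abort_stats() {
  local completed=$1 submitted=$2 skipped=$3 ok=$4 bad=$5
  (( submitted + skipped == completed )) || echo "I/O accounting mismatch"
  (( ok + bad == submitted ))            || echo "abort accounting mismatch"
}
check_abort_stats 13440 1090 12350 753 337   # qd=4 run above: both hold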
00:26:08.916 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35527, failed: 0 00:26:08.916 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2414, failed to submit 33113 00:26:08.916 success 578, unsuccessful 1836, failed 0 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.916 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84602 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84602 ']' 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84602 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.484 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84602 00:26:09.742 killing process with pid 84602 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84602' 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84602 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84602 00:26:09.742 ************************************ 00:26:09.742 END TEST spdk_target_abort 00:26:09.742 ************************************ 00:26:09.742 00:26:09.742 real 0m10.809s 00:26:09.742 user 0m43.128s 00:26:09.742 sys 0m3.004s 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.742 11:09:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:09.742 11:09:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:09.742 11:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.742 11:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.742 11:09:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:10.026 ************************************ 00:26:10.026 START TEST kernel_target_abort 00:26:10.026 
************************************ 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:10.026 11:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:10.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:10.592 Waiting for block devices as requested 00:26:10.592 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:10.592 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:10.851 No valid GPT data, bailing 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:10.851 No valid GPT data, bailing 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
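Before exporting a disk through the kernel target, the scan traced here walks /sys/block/nvme* looking for a namespace that is safe to overwrite: a device qualifies if it is not zoned and spdk-gpt.py/blkid find no partition table, which is what each "No valid GPT data, bailing" signals (block_in_use returns 1, meaning free). A simplified sketch of that scan, not the exact scripts/common.sh logic:
# Skip zoned namespaces, then treat "no PTTYPE" (blkid exits non-zero)
# as "not in use"; the last free device found is the one the test exports.
for sys in /sys/block/nvme*; do
  dev=${sys##*/}
  [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]] && continue
  if ! blkid -s PTTYPE -o value "/dev/$dev" >/dev/null 2>&1; then
    echo "free: /dev/$dev"
  fi
done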
00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:10.851 No valid GPT data, bailing 00:26:10.851 11:09:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:26:10.851 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:11.111 No valid GPT data, bailing 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ 
-b /dev/nvme1n1 ]] 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 --hostid=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 -a 10.0.0.1 -t tcp -s 4420 00:26:11.111 00:26:11.111 Discovery Log Number of Records 2, Generation counter 2 00:26:11.111 =====Discovery Log Entry 0====== 00:26:11.111 trtype: tcp 00:26:11.111 adrfam: ipv4 00:26:11.111 subtype: current discovery subsystem 00:26:11.111 treq: not specified, sq flow control disable supported 00:26:11.111 portid: 1 00:26:11.111 trsvcid: 4420 00:26:11.111 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:11.111 traddr: 10.0.0.1 00:26:11.111 eflags: none 00:26:11.111 sectype: none 00:26:11.111 =====Discovery Log Entry 1====== 00:26:11.111 trtype: tcp 00:26:11.111 adrfam: ipv4 00:26:11.111 subtype: nvme subsystem 00:26:11.111 treq: not specified, sq flow control disable supported 00:26:11.111 portid: 1 00:26:11.111 trsvcid: 4420 00:26:11.111 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:11.111 traddr: 10.0.0.1 00:26:11.111 eflags: none 00:26:11.111 sectype: none 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:11.111 11:09:38 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:11.111 11:09:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:14.401 Initializing NVMe Controllers 00:26:14.401 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:14.401 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:14.401 Initialization complete. Launching workers. 00:26:14.401 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37447, failed: 0 00:26:14.401 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37447, failed to submit 0 00:26:14.401 success 0, unsuccessful 37447, failed 0 00:26:14.401 11:09:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:14.401 11:09:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:17.681 Initializing NVMe Controllers 00:26:17.681 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:17.681 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:17.681 Initialization complete. Launching workers. 
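For readers following the rabort helper traced above: it assembles the connection string one field at a time and then reruns the abort example once per queue depth in qds=(4 24 64). Condensed into plain shell, the traced flow amounts to the sketch below (binary path, NQN, and flags copied from the trace; this is a reduction of target/abort_qd_sizes.sh, not the script itself):

    # sweep queue depths against the kernel target brought up earlier
    qds=(4 24 64)
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in "${qds[@]}"; do
        # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/Os
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

At -q 4 every submitted abort still finds its I/O in flight (37447 submitted, 0 failed to submit); in the deeper runs that follow, a growing share of aborts can no longer be submitted, presumably because the target completes the I/O first.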
00:26:17.681 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80023, failed: 0 00:26:17.681 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39322, failed to submit 40701 00:26:17.681 success 0, unsuccessful 39322, failed 0 00:26:17.681 11:09:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:17.681 11:09:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:20.962 Initializing NVMe Controllers 00:26:20.962 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:20.962 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:20.962 Initialization complete. Launching workers. 00:26:20.962 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107271, failed: 0 00:26:20.962 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26808, failed to submit 80463 00:26:20.962 success 0, unsuccessful 26808, failed 0 00:26:20.962 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:20.962 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:20.962 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:26:20.963 11:09:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:21.530 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:24.085 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:24.085 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:24.344 00:26:24.344 real 0m14.345s 00:26:24.344 user 0m6.391s 00:26:24.344 sys 0m5.400s 00:26:24.344 ************************************ 00:26:24.344 END TEST kernel_target_abort 00:26:24.344 ************************************ 00:26:24.344 11:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.344 11:09:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:24.344 
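The kernel_target_abort phase that just finished drives the in-kernel nvmet target entirely through configfs. The trace shows the mkdir/echo/ln calls on setup and the mirror-image teardown in clean_kernel_target; xtrace does not capture redirection targets, so the attribute paths below are the standard nvmet configfs names the echoes are assumed to land in:

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    mkdir "$cfg/subsystems/$nqn"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    mkdir "$cfg/ports/1"
    echo "SPDK-$nqn" > "$cfg/subsystems/$nqn/attr_serial"                # assumed target of the SPDK-nqn echo
    echo 1 > "$cfg/subsystems/$nqn/attr_allow_any_host"
    echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"  # first unused, non-zoned block device found
    echo 1 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
    echo tcp > "$cfg/ports/1/addr_trtype"
    echo 4420 > "$cfg/ports/1/addr_trsvcid"
    echo ipv4 > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/"
    # teardown, as in clean_kernel_target:
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet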
11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:24.344 rmmod nvme_tcp 00:26:24.344 rmmod nvme_fabrics 00:26:24.344 rmmod nvme_keyring 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 84602 ']' 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 84602 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84602 ']' 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84602 00:26:24.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84602) - No such process 00:26:24.344 Process with pid 84602 is not found 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84602 is not found' 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:26:24.344 11:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:24.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:24.910 Waiting for block devices as requested 00:26:24.910 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.168 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:25.168 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:25.169 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:25.427 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:25.427 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:25.427 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- 
# delete_dev initiator0 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:26:25.428 00:26:25.428 real 0m29.457s 00:26:25.428 user 0m50.964s 00:26:25.428 sys 0m10.452s 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.428 11:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:25.428 ************************************ 00:26:25.428 END TEST nvmf_abort_qd_sizes 00:26:25.428 ************************************ 00:26:25.428 11:09:52 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:25.428 11:09:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:25.428 11:09:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:25.428 11:09:52 -- common/autotest_common.sh@10 -- # set +x 00:26:25.428 ************************************ 00:26:25.428 START TEST keyring_file 00:26:25.428 ************************************ 00:26:25.428 11:09:52 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:26:25.687 * Looking for test storage... 
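nvmf_fini, traced just above, walks dev_map and deletes only the veth endpoints that still exist in the default namespace; target0 and target1 lived inside the now-removed nvmf_ns_spdk namespace, which is why both hit the continue branch. Stripped of the eval plumbing, the cleanup reduces to:

    ip link delete nvmf_br      # the test bridge
    ip link delete initiator0
    ip link delete initiator1
    # target0/target1 vanished with the target namespace, so they are skipped
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules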
00:26:25.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@345 -- # : 1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@353 -- # local d=1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@355 -- # echo 1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@353 -- # local d=2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@355 -- # echo 2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:25.687 11:09:52 keyring_file -- scripts/common.sh@368 -- # return 0 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:25.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.687 --rc genhtml_branch_coverage=1 00:26:25.687 --rc genhtml_function_coverage=1 00:26:25.687 --rc genhtml_legend=1 00:26:25.687 --rc geninfo_all_blocks=1 00:26:25.687 --rc geninfo_unexecuted_blocks=1 00:26:25.687 00:26:25.687 ' 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:25.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.687 --rc genhtml_branch_coverage=1 00:26:25.687 --rc genhtml_function_coverage=1 00:26:25.687 --rc genhtml_legend=1 00:26:25.687 --rc geninfo_all_blocks=1 00:26:25.687 --rc 
geninfo_unexecuted_blocks=1 00:26:25.687 00:26:25.687 ' 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:25.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.687 --rc genhtml_branch_coverage=1 00:26:25.687 --rc genhtml_function_coverage=1 00:26:25.687 --rc genhtml_legend=1 00:26:25.687 --rc geninfo_all_blocks=1 00:26:25.687 --rc geninfo_unexecuted_blocks=1 00:26:25.687 00:26:25.687 ' 00:26:25.687 11:09:52 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:25.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:25.688 --rc genhtml_branch_coverage=1 00:26:25.688 --rc genhtml_function_coverage=1 00:26:25.688 --rc genhtml_legend=1 00:26:25.688 --rc geninfo_all_blocks=1 00:26:25.688 --rc geninfo_unexecuted_blocks=1 00:26:25.688 00:26:25.688 ' 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.688 11:09:52 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:26:25.688 11:09:52 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.688 11:09:52 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.688 11:09:52 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.688 11:09:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.688 11:09:52 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.688 11:09:52 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.688 11:09:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:25.688 11:09:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:25.688 11:09:52 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:25.688 11:09:52 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:25.688 11:09:52 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@50 -- # : 0 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:25.688 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:25.688 11:09:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:25.688 
11:09:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BybgQpcVih 00:26:25.688 11:09:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:26:25.688 11:09:52 keyring_file -- nvmf/common.sh@507 -- # python - 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BybgQpcVih 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BybgQpcVih 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.BybgQpcVih 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LRYOqE6Nlm 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:26:25.947 11:09:52 keyring_file -- nvmf/common.sh@507 -- # python - 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LRYOqE6Nlm 00:26:25.947 11:09:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LRYOqE6Nlm 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LRYOqE6Nlm 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=85524 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:25.947 11:09:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85524 00:26:25.947 11:09:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85524 ']' 00:26:25.948 11:09:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.948 11:09:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
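prep_key, whose traces for key0 and key1 end here, materializes an NVMe TLS PSK in interchange form into a mktemp file and clamps its mode to 0600 before spdk_tgt is started. The same steps, using the helpers named in the trace (format_interchange_psk is the nvmf/common.sh wrapper around an inline python encoder that xtrace does not expand):

    key=00112233445566778899aabbccddeeff       # key0's raw material; digest 0
    path=$(mktemp)                             # the trace got /tmp/tmp.BybgQpcVih
    format_interchange_psk "$key" 0 > "$path"  # prefix NVMeTLSkey-1, digest 0, per the trace
    chmod 0600 "$path"                         # a looser mode is rejected later in the test
    # repeated with 112233445566778899aabbccddeeff00 for key1 (/tmp/tmp.LRYOqE6Nlm)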
00:26:25.948 11:09:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.948 11:09:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.948 11:09:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:25.948 [2024-12-05 11:09:52.978647] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:26:25.948 [2024-12-05 11:09:52.978728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85524 ] 00:26:26.206 [2024-12-05 11:09:53.129717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.206 [2024-12-05 11:09:53.179926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.206 [2024-12-05 11:09:53.236035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:26.774 11:09:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:26.774 [2024-12-05 11:09:53.853459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.774 null0 00:26:26.774 [2024-12-05 11:09:53.885379] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:26.774 [2024-12-05 11:09:53.885539] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.774 11:09:53 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:26.774 [2024-12-05 11:09:53.917374] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:26.774 request: 00:26:26.774 { 00:26:26.774 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.774 "secure_channel": false, 00:26:26.774 "listen_address": { 00:26:26.774 "trtype": "tcp", 00:26:26.774 "traddr": "127.0.0.1", 00:26:26.774 "trsvcid": "4420" 00:26:26.774 }, 00:26:26.774 "method": "nvmf_subsystem_add_listener", 
00:26:26.774 "req_id": 1 00:26:26.774 } 00:26:26.774 Got JSON-RPC error response 00:26:26.774 response: 00:26:26.774 { 00:26:26.774 "code": -32602, 00:26:26.774 "message": "Invalid parameters" 00:26:26.774 } 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:26.774 11:09:53 keyring_file -- keyring/file.sh@47 -- # bperfpid=85541 00:26:26.774 11:09:53 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:26.774 11:09:53 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85541 /var/tmp/bperf.sock 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85541 ']' 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.774 11:09:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:27.033 [2024-12-05 11:09:53.978632] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
00:26:27.033 [2024-12-05 11:09:53.978706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85541 ] 00:26:27.033 [2024-12-05 11:09:54.129841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.033 [2024-12-05 11:09:54.179910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.293 [2024-12-05 11:09:54.221279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:27.858 11:09:54 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.858 11:09:54 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:27.858 11:09:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:27.858 11:09:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:28.117 11:09:55 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LRYOqE6Nlm 00:26:28.117 11:09:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LRYOqE6Nlm 00:26:28.375 11:09:55 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:26:28.375 11:09:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:28.375 11:09:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:28.375 11:09:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:28.375 11:09:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:28.633 11:09:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.BybgQpcVih == \/\t\m\p\/\t\m\p\.\B\y\b\g\Q\p\c\V\i\h ]] 00:26:28.633 11:09:55 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:26:28.633 11:09:55 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:26:28.633 11:09:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:28.633 11:09:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:28.633 11:09:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:28.891 11:09:55 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.LRYOqE6Nlm == \/\t\m\p\/\t\m\p\.\L\R\Y\O\q\E\6\N\l\m ]] 00:26:28.891 11:09:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:26:28.891 11:09:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:28.891 11:09:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:28.891 11:09:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:28.891 11:09:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:28.891 11:09:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:29.150 11:09:56 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:29.150 11:09:56 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:26:29.150 11:09:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:29.150 11:09:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:29.150 11:09:56 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:29.150 11:09:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:29.150 11:09:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:29.409 11:09:56 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:26:29.409 11:09:56 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:29.409 11:09:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:29.409 [2024-12-05 11:09:56.495227] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:29.409 nvme0n1 00:26:29.668 11:09:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:29.668 11:09:56 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:26:29.668 11:09:56 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:29.668 11:09:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:29.927 11:09:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:26:29.927 11:09:57 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.185 Running I/O for 1 seconds... 
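Before the one-second I/O run whose results follow, the trace registers both key files with the bdevperf instance over its private socket, attaches the controller with --psk key0, and verifies reference counts by filtering keyring_get_keys through jq. Replayed as plain commands (paths and arguments exactly as logged):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.BybgQpcVih
    "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.LRYOqE6Nlm
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # key0 is now held by the keyring and by the live controller -> expect refcnt 2
    "$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'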
00:26:31.122 16955.00 IOPS, 66.23 MiB/s
00:26:31.122 Latency(us)
00:26:31.122 [2024-12-05T11:09:58.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.122 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:26:31.122 nvme0n1 : 1.01 16995.35 66.39 0.00 0.00 7515.95 3342.60 11475.38
00:26:31.122 [2024-12-05T11:09:58.281Z] ===================================================================================================================
00:26:31.122 [2024-12-05T11:09:58.281Z] Total : 16995.35 66.39 0.00 0.00 7515.95 3342.60 11475.38
00:26:31.122 {
00:26:31.122 "results": [
00:26:31.122 {
00:26:31.122 "job": "nvme0n1",
00:26:31.122 "core_mask": "0x2",
00:26:31.122 "workload": "randrw",
00:26:31.122 "percentage": 50,
00:26:31.122 "status": "finished",
00:26:31.122 "queue_depth": 128,
00:26:31.122 "io_size": 4096,
00:26:31.122 "runtime": 1.005216,
00:26:31.122 "iops": 16995.352242702065,
00:26:31.122 "mibps": 66.38809469805494,
00:26:31.122 "io_failed": 0,
00:26:31.122 "io_timeout": 0,
00:26:31.122 "avg_latency_us": 7515.952538414017,
00:26:31.122 "min_latency_us": 3342.5991967871487,
00:26:31.122 "max_latency_us": 11475.379919678715
00:26:31.122 }
00:26:31.122 ],
00:26:31.122 "core_count": 1
00:26:31.122 }
00:26:31.122 11:09:58 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:26:31.122 11:09:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:26:31.381 11:09:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:26:31.381 11:09:58 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:26:31.381 11:09:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:26:31.381 11:09:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:26:31.381 11:09:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:31.381 11:09:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:26:31.641 11:09:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:26:31.641 11:09:58 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:26:31.641 11:09:58 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:26:31.641 11:09:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:26:31.641 11:09:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:26:31.641 11:09:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:26:31.641 11:09:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:26:31.900 11:09:58 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:26:31.900 11:09:58 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:26:31.900 11:09:58 keyring_file --
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:31.900 11:09:58 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:31.900 11:09:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:32.160 [2024-12-05 11:09:59.089430] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:32.160 [2024-12-05 11:09:59.090095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa45d0 (107): Transport endpoint is not connected 00:26:32.160 [2024-12-05 11:09:59.091083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa45d0 (9): Bad file descriptor 00:26:32.160 [2024-12-05 11:09:59.092081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:32.160 [2024-12-05 11:09:59.092104] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:32.160 [2024-12-05 11:09:59.092114] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:32.160 [2024-12-05 11:09:59.092124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
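The errno trail just logged (spdk_sock_recv failing with 107, then a bad file descriptor, then the controller entering the error state) is the initiator-side view of the negative test at file.sh@70: attaching with key1 where the session was evidently provisioned for key0 must fail, and NOT asserts the non-zero exit. In isolation, continuing the rpc/sock variables from the sketch above:

    # must fail: wrong PSK for this subsystem (the trace reports -5, Input/output error)
    if "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "unexpected success: attach with key1 should have been refused" >&2
        exit 1
    fi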
00:26:32.160 request: 00:26:32.160 { 00:26:32.160 "name": "nvme0", 00:26:32.160 "trtype": "tcp", 00:26:32.160 "traddr": "127.0.0.1", 00:26:32.160 "adrfam": "ipv4", 00:26:32.160 "trsvcid": "4420", 00:26:32.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.160 "prchk_reftag": false, 00:26:32.160 "prchk_guard": false, 00:26:32.160 "hdgst": false, 00:26:32.160 "ddgst": false, 00:26:32.160 "psk": "key1", 00:26:32.160 "allow_unrecognized_csi": false, 00:26:32.160 "method": "bdev_nvme_attach_controller", 00:26:32.160 "req_id": 1 00:26:32.160 } 00:26:32.160 Got JSON-RPC error response 00:26:32.160 response: 00:26:32.160 { 00:26:32.160 "code": -5, 00:26:32.160 "message": "Input/output error" 00:26:32.160 } 00:26:32.160 11:09:59 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:32.160 11:09:59 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:32.160 11:09:59 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:32.160 11:09:59 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:32.160 11:09:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:26:32.160 11:09:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:32.160 11:09:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:32.160 11:09:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:32.160 11:09:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:32.160 11:09:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:32.420 11:09:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:32.420 11:09:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:32.420 11:09:59 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:26:32.420 11:09:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:26:32.420 11:09:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:32.679 11:09:59 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:26:32.679 11:09:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:32.938 11:09:59 keyring_file -- keyring/file.sh@78 -- # jq length 00:26:32.938 11:09:59 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:26:32.938 11:09:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:33.196 11:10:00 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:26:33.196 11:10:00 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.BybgQpcVih 00:26:33.196 11:10:00 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.196 11:10:00 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:26:33.196 11:10:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.196 11:10:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:33.196 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:33.196 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:33.196 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:33.197 11:10:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.197 11:10:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.197 [2024-12-05 11:10:00.345412] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.BybgQpcVih': 0100660 00:26:33.197 [2024-12-05 11:10:00.345450] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:33.197 request: 00:26:33.197 { 00:26:33.197 "name": "key0", 00:26:33.197 "path": "/tmp/tmp.BybgQpcVih", 00:26:33.197 "method": "keyring_file_add_key", 00:26:33.197 "req_id": 1 00:26:33.197 } 00:26:33.197 Got JSON-RPC error response 00:26:33.197 response: 00:26:33.197 { 00:26:33.197 "code": -1, 00:26:33.197 "message": "Operation not permitted" 00:26:33.197 } 00:26:33.455 11:10:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:33.455 11:10:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:33.455 11:10:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:33.455 11:10:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:33.455 11:10:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.BybgQpcVih 00:26:33.455 11:10:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BybgQpcVih 00:26:33.455 11:10:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.BybgQpcVih 00:26:33.455 11:10:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:33.455 11:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:33.714 11:10:00 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:26:33.714 11:10:00 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.714 11:10:00 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:33.714 11:10:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.714 11:10:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:33.972 [2024-12-05 11:10:01.004487] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.BybgQpcVih': No such file or directory 00:26:33.972 [2024-12-05 11:10:01.004530] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:33.972 [2024-12-05 11:10:01.004550] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:33.972 [2024-12-05 11:10:01.004558] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:26:33.972 [2024-12-05 11:10:01.004569] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:33.972 [2024-12-05 11:10:01.004577] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:33.972 request: 00:26:33.972 { 00:26:33.972 "name": "nvme0", 00:26:33.972 "trtype": "tcp", 00:26:33.972 "traddr": "127.0.0.1", 00:26:33.972 "adrfam": "ipv4", 00:26:33.972 "trsvcid": "4420", 00:26:33.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:33.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:33.972 "prchk_reftag": false, 00:26:33.972 "prchk_guard": false, 00:26:33.972 "hdgst": false, 00:26:33.972 "ddgst": false, 00:26:33.972 "psk": "key0", 00:26:33.972 "allow_unrecognized_csi": false, 00:26:33.972 "method": "bdev_nvme_attach_controller", 00:26:33.972 "req_id": 1 00:26:33.972 } 00:26:33.972 Got JSON-RPC error response 00:26:33.972 response: 00:26:33.972 { 00:26:33.972 "code": -19, 00:26:33.972 "message": "No such device" 00:26:33.972 } 00:26:33.972 11:10:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:26:33.972 11:10:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:33.972 11:10:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:33.972 11:10:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:33.972 11:10:01 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:26:33.972 11:10:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:34.230 11:10:01 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:34.230 
11:10:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YTM7XwnCGP 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:26:34.230 11:10:01 keyring_file -- nvmf/common.sh@507 -- # python - 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YTM7XwnCGP 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YTM7XwnCGP 00:26:34.230 11:10:01 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.YTM7XwnCGP 00:26:34.230 11:10:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YTM7XwnCGP 00:26:34.230 11:10:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YTM7XwnCGP 00:26:34.492 11:10:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:34.492 11:10:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:34.750 nvme0n1 00:26:34.750 11:10:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:26:34.750 11:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:34.750 11:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:34.750 11:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:34.750 11:10:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:34.750 11:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:35.043 11:10:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:26:35.043 11:10:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:26:35.043 11:10:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:35.339 11:10:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:26:35.339 11:10:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.339 11:10:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:26:35.339 11:10:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:26:35.339 11:10:02 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.339 11:10:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:35.597 11:10:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:26:35.597 11:10:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:35.597 11:10:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:35.854 11:10:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:26:35.854 11:10:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:35.854 11:10:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:26:36.112 11:10:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:26:36.112 11:10:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.YTM7XwnCGP 00:26:36.112 11:10:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.YTM7XwnCGP 00:26:36.370 11:10:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LRYOqE6Nlm 00:26:36.370 11:10:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LRYOqE6Nlm 00:26:36.628 11:10:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:36.628 11:10:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:36.887 nvme0n1 00:26:36.887 11:10:03 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:26:36.887 11:10:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:37.145 11:10:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:26:37.145 "subsystems": [ 00:26:37.145 { 00:26:37.145 "subsystem": "keyring", 00:26:37.145 "config": [ 00:26:37.145 { 00:26:37.145 "method": "keyring_file_add_key", 00:26:37.145 "params": { 00:26:37.145 "name": "key0", 00:26:37.145 "path": "/tmp/tmp.YTM7XwnCGP" 00:26:37.145 } 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "method": "keyring_file_add_key", 00:26:37.145 "params": { 00:26:37.145 "name": "key1", 00:26:37.145 "path": "/tmp/tmp.LRYOqE6Nlm" 00:26:37.145 } 00:26:37.145 } 00:26:37.145 ] 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "subsystem": "iobuf", 00:26:37.145 "config": [ 00:26:37.145 { 00:26:37.145 "method": "iobuf_set_options", 00:26:37.145 "params": { 00:26:37.145 "small_pool_count": 8192, 00:26:37.145 "large_pool_count": 1024, 00:26:37.145 "small_bufsize": 8192, 00:26:37.145 "large_bufsize": 135168, 00:26:37.145 "enable_numa": false 00:26:37.145 } 00:26:37.145 } 00:26:37.145 ] 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "subsystem": 
"sock", 00:26:37.145 "config": [ 00:26:37.145 { 00:26:37.145 "method": "sock_set_default_impl", 00:26:37.145 "params": { 00:26:37.145 "impl_name": "uring" 00:26:37.145 } 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "method": "sock_impl_set_options", 00:26:37.145 "params": { 00:26:37.145 "impl_name": "ssl", 00:26:37.145 "recv_buf_size": 4096, 00:26:37.145 "send_buf_size": 4096, 00:26:37.145 "enable_recv_pipe": true, 00:26:37.145 "enable_quickack": false, 00:26:37.145 "enable_placement_id": 0, 00:26:37.145 "enable_zerocopy_send_server": true, 00:26:37.145 "enable_zerocopy_send_client": false, 00:26:37.145 "zerocopy_threshold": 0, 00:26:37.145 "tls_version": 0, 00:26:37.145 "enable_ktls": false 00:26:37.145 } 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "method": "sock_impl_set_options", 00:26:37.145 "params": { 00:26:37.145 "impl_name": "posix", 00:26:37.145 "recv_buf_size": 2097152, 00:26:37.145 "send_buf_size": 2097152, 00:26:37.145 "enable_recv_pipe": true, 00:26:37.145 "enable_quickack": false, 00:26:37.145 "enable_placement_id": 0, 00:26:37.145 "enable_zerocopy_send_server": true, 00:26:37.145 "enable_zerocopy_send_client": false, 00:26:37.145 "zerocopy_threshold": 0, 00:26:37.145 "tls_version": 0, 00:26:37.145 "enable_ktls": false 00:26:37.145 } 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "method": "sock_impl_set_options", 00:26:37.145 "params": { 00:26:37.145 "impl_name": "uring", 00:26:37.145 "recv_buf_size": 2097152, 00:26:37.145 "send_buf_size": 2097152, 00:26:37.145 "enable_recv_pipe": true, 00:26:37.145 "enable_quickack": false, 00:26:37.145 "enable_placement_id": 0, 00:26:37.145 "enable_zerocopy_send_server": false, 00:26:37.145 "enable_zerocopy_send_client": false, 00:26:37.145 "zerocopy_threshold": 0, 00:26:37.145 "tls_version": 0, 00:26:37.145 "enable_ktls": false 00:26:37.145 } 00:26:37.145 } 00:26:37.145 ] 00:26:37.145 }, 00:26:37.145 { 00:26:37.145 "subsystem": "vmd", 00:26:37.145 "config": [] 00:26:37.145 }, 00:26:37.145 { 00:26:37.146 "subsystem": "accel", 00:26:37.146 "config": [ 00:26:37.146 { 00:26:37.146 "method": "accel_set_options", 00:26:37.146 "params": { 00:26:37.146 "small_cache_size": 128, 00:26:37.146 "large_cache_size": 16, 00:26:37.146 "task_count": 2048, 00:26:37.146 "sequence_count": 2048, 00:26:37.146 "buf_count": 2048 00:26:37.146 } 00:26:37.146 } 00:26:37.146 ] 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "subsystem": "bdev", 00:26:37.146 "config": [ 00:26:37.146 { 00:26:37.146 "method": "bdev_set_options", 00:26:37.146 "params": { 00:26:37.146 "bdev_io_pool_size": 65535, 00:26:37.146 "bdev_io_cache_size": 256, 00:26:37.146 "bdev_auto_examine": true, 00:26:37.146 "iobuf_small_cache_size": 128, 00:26:37.146 "iobuf_large_cache_size": 16 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_raid_set_options", 00:26:37.146 "params": { 00:26:37.146 "process_window_size_kb": 1024, 00:26:37.146 "process_max_bandwidth_mb_sec": 0 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_iscsi_set_options", 00:26:37.146 "params": { 00:26:37.146 "timeout_sec": 30 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_nvme_set_options", 00:26:37.146 "params": { 00:26:37.146 "action_on_timeout": "none", 00:26:37.146 "timeout_us": 0, 00:26:37.146 "timeout_admin_us": 0, 00:26:37.146 "keep_alive_timeout_ms": 10000, 00:26:37.146 "arbitration_burst": 0, 00:26:37.146 "low_priority_weight": 0, 00:26:37.146 "medium_priority_weight": 0, 00:26:37.146 "high_priority_weight": 0, 00:26:37.146 "nvme_adminq_poll_period_us": 
10000, 00:26:37.146 "nvme_ioq_poll_period_us": 0, 00:26:37.146 "io_queue_requests": 512, 00:26:37.146 "delay_cmd_submit": true, 00:26:37.146 "transport_retry_count": 4, 00:26:37.146 "bdev_retry_count": 3, 00:26:37.146 "transport_ack_timeout": 0, 00:26:37.146 "ctrlr_loss_timeout_sec": 0, 00:26:37.146 "reconnect_delay_sec": 0, 00:26:37.146 "fast_io_fail_timeout_sec": 0, 00:26:37.146 "disable_auto_failback": false, 00:26:37.146 "generate_uuids": false, 00:26:37.146 "transport_tos": 0, 00:26:37.146 "nvme_error_stat": false, 00:26:37.146 "rdma_srq_size": 0, 00:26:37.146 "io_path_stat": false, 00:26:37.146 "allow_accel_sequence": false, 00:26:37.146 "rdma_max_cq_size": 0, 00:26:37.146 "rdma_cm_event_timeout_ms": 0, 00:26:37.146 "dhchap_digests": [ 00:26:37.146 "sha256", 00:26:37.146 "sha384", 00:26:37.146 "sha512" 00:26:37.146 ], 00:26:37.146 "dhchap_dhgroups": [ 00:26:37.146 "null", 00:26:37.146 "ffdhe2048", 00:26:37.146 "ffdhe3072", 00:26:37.146 "ffdhe4096", 00:26:37.146 "ffdhe6144", 00:26:37.146 "ffdhe8192" 00:26:37.146 ] 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_nvme_attach_controller", 00:26:37.146 "params": { 00:26:37.146 "name": "nvme0", 00:26:37.146 "trtype": "TCP", 00:26:37.146 "adrfam": "IPv4", 00:26:37.146 "traddr": "127.0.0.1", 00:26:37.146 "trsvcid": "4420", 00:26:37.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:37.146 "prchk_reftag": false, 00:26:37.146 "prchk_guard": false, 00:26:37.146 "ctrlr_loss_timeout_sec": 0, 00:26:37.146 "reconnect_delay_sec": 0, 00:26:37.146 "fast_io_fail_timeout_sec": 0, 00:26:37.146 "psk": "key0", 00:26:37.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:37.146 "hdgst": false, 00:26:37.146 "ddgst": false, 00:26:37.146 "multipath": "multipath" 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_nvme_set_hotplug", 00:26:37.146 "params": { 00:26:37.146 "period_us": 100000, 00:26:37.146 "enable": false 00:26:37.146 } 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "method": "bdev_wait_for_examine" 00:26:37.146 } 00:26:37.146 ] 00:26:37.146 }, 00:26:37.146 { 00:26:37.146 "subsystem": "nbd", 00:26:37.146 "config": [] 00:26:37.146 } 00:26:37.146 ] 00:26:37.146 }' 00:26:37.146 11:10:04 keyring_file -- keyring/file.sh@115 -- # killprocess 85541 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85541 ']' 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85541 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85541 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:37.146 killing process with pid 85541 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85541' 00:26:37.146 Received shutdown signal, test time was about 1.000000 seconds 00:26:37.146 00:26:37.146 Latency(us) 00:26:37.146 [2024-12-05T11:10:04.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.146 [2024-12-05T11:10:04.305Z] =================================================================================================================== 00:26:37.146 [2024-12-05T11:10:04.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.146 
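The save_config dump above is what ties the two halves of this test together: after the first bdevperf is killed, the same JSON is echoed into a second instance below over /dev/fd/63, so the new process comes up with the identical keyring, sock, and bdev state. A hedged sketch of that hand-off using process substitution, with the paths and flags as they appear in this log:

config=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)
# <(...) surfaces the config as /dev/fd/63, matching the -c argument traced below
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 \
    -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")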
11:10:04 keyring_file -- common/autotest_common.sh@973 -- # kill 85541 00:26:37.146 11:10:04 keyring_file -- common/autotest_common.sh@978 -- # wait 85541 00:26:37.405 11:10:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=85780 00:26:37.405 11:10:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85780 /var/tmp/bperf.sock 00:26:37.405 11:10:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85780 ']' 00:26:37.405 11:10:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:37.405 11:10:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:37.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:37.406 11:10:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:37.406 11:10:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:37.406 11:10:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:37.406 11:10:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:26:37.406 "subsystems": [ 00:26:37.406 { 00:26:37.406 "subsystem": "keyring", 00:26:37.406 "config": [ 00:26:37.406 { 00:26:37.406 "method": "keyring_file_add_key", 00:26:37.406 "params": { 00:26:37.406 "name": "key0", 00:26:37.406 "path": "/tmp/tmp.YTM7XwnCGP" 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "keyring_file_add_key", 00:26:37.406 "params": { 00:26:37.406 "name": "key1", 00:26:37.406 "path": "/tmp/tmp.LRYOqE6Nlm" 00:26:37.406 } 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "iobuf", 00:26:37.406 "config": [ 00:26:37.406 { 00:26:37.406 "method": "iobuf_set_options", 00:26:37.406 "params": { 00:26:37.406 "small_pool_count": 8192, 00:26:37.406 "large_pool_count": 1024, 00:26:37.406 "small_bufsize": 8192, 00:26:37.406 "large_bufsize": 135168, 00:26:37.406 "enable_numa": false 00:26:37.406 } 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "sock", 00:26:37.406 "config": [ 00:26:37.406 { 00:26:37.406 "method": "sock_set_default_impl", 00:26:37.406 "params": { 00:26:37.406 "impl_name": "uring" 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "sock_impl_set_options", 00:26:37.406 "params": { 00:26:37.406 "impl_name": "ssl", 00:26:37.406 "recv_buf_size": 4096, 00:26:37.406 "send_buf_size": 4096, 00:26:37.406 "enable_recv_pipe": true, 00:26:37.406 "enable_quickack": false, 00:26:37.406 "enable_placement_id": 0, 00:26:37.406 "enable_zerocopy_send_server": true, 00:26:37.406 "enable_zerocopy_send_client": false, 00:26:37.406 "zerocopy_threshold": 0, 00:26:37.406 "tls_version": 0, 00:26:37.406 "enable_ktls": false 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "sock_impl_set_options", 00:26:37.406 "params": { 00:26:37.406 "impl_name": "posix", 00:26:37.406 "recv_buf_size": 2097152, 00:26:37.406 "send_buf_size": 2097152, 00:26:37.406 "enable_recv_pipe": true, 00:26:37.406 "enable_quickack": false, 00:26:37.406 "enable_placement_id": 0, 00:26:37.406 "enable_zerocopy_send_server": true, 00:26:37.406 "enable_zerocopy_send_client": false, 00:26:37.406 "zerocopy_threshold": 0, 00:26:37.406 "tls_version": 0, 00:26:37.406 "enable_ktls": false 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "sock_impl_set_options", 00:26:37.406 "params": { 00:26:37.406 "impl_name": "uring", 00:26:37.406 "recv_buf_size": 2097152, 
00:26:37.406 "send_buf_size": 2097152, 00:26:37.406 "enable_recv_pipe": true, 00:26:37.406 "enable_quickack": false, 00:26:37.406 "enable_placement_id": 0, 00:26:37.406 "enable_zerocopy_send_server": false, 00:26:37.406 "enable_zerocopy_send_client": false, 00:26:37.406 "zerocopy_threshold": 0, 00:26:37.406 "tls_version": 0, 00:26:37.406 "enable_ktls": false 00:26:37.406 } 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "vmd", 00:26:37.406 "config": [] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "accel", 00:26:37.406 "config": [ 00:26:37.406 { 00:26:37.406 "method": "accel_set_options", 00:26:37.406 "params": { 00:26:37.406 "small_cache_size": 128, 00:26:37.406 "large_cache_size": 16, 00:26:37.406 "task_count": 2048, 00:26:37.406 "sequence_count": 2048, 00:26:37.406 "buf_count": 2048 00:26:37.406 } 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "bdev", 00:26:37.406 "config": [ 00:26:37.406 { 00:26:37.406 "method": "bdev_set_options", 00:26:37.406 "params": { 00:26:37.406 "bdev_io_pool_size": 65535, 00:26:37.406 "bdev_io_cache_size": 256, 00:26:37.406 "bdev_auto_examine": true, 00:26:37.406 "iobuf_small_cache_size": 128, 00:26:37.406 "iobuf_large_cache_size": 16 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_raid_set_options", 00:26:37.406 "params": { 00:26:37.406 "process_window_size_kb": 1024, 00:26:37.406 "process_max_bandwidth_mb_sec": 0 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_iscsi_set_options", 00:26:37.406 "params": { 00:26:37.406 "timeout_sec": 30 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_nvme_set_options", 00:26:37.406 "params": { 00:26:37.406 "action_on_timeout": "none", 00:26:37.406 "timeout_us": 0, 00:26:37.406 "timeout_admin_us": 0, 00:26:37.406 "keep_alive_timeout_ms": 10000, 00:26:37.406 "arbitration_burst": 0, 00:26:37.406 "low_priority_weight": 0, 00:26:37.406 "medium_priority_weight": 0, 00:26:37.406 "high_priority_weight": 0, 00:26:37.406 "nvme_adminq_poll_period_us": 10000, 00:26:37.406 "nvme_ioq_poll_period_us": 0, 00:26:37.406 "io_queue_requests": 512, 00:26:37.406 "delay_cmd_submit": true, 00:26:37.406 "transport_retry_count": 4, 00:26:37.406 "bdev_retry_count": 3, 00:26:37.406 "transport_ack_timeout": 0, 00:26:37.406 "ctrlr_loss_timeout_sec": 0, 00:26:37.406 "reconnect_delay_sec": 0, 00:26:37.406 "fast_io_fail_timeout_sec": 0, 00:26:37.406 "disable_auto_failback": false, 00:26:37.406 "generate_uuids": false, 00:26:37.406 "transport_tos": 0, 00:26:37.406 "nvme_error_stat": false, 00:26:37.406 "rdma_srq_size": 0, 00:26:37.406 "io_path_stat": false, 00:26:37.406 "allow_accel_sequence": false, 00:26:37.406 "rdma_max_cq_size": 0, 00:26:37.406 "rdma_cm_event_timeout_ms": 0, 00:26:37.406 "dhchap_digests": [ 00:26:37.406 "sha256", 00:26:37.406 "sha384", 00:26:37.406 "sha512" 00:26:37.406 ], 00:26:37.406 "dhchap_dhgroups": [ 00:26:37.406 "null", 00:26:37.406 "ffdhe2048", 00:26:37.406 "ffdhe3072", 00:26:37.406 "ffdhe4096", 00:26:37.406 "ffdhe6144", 00:26:37.406 "ffdhe8192" 00:26:37.406 ] 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_nvme_attach_controller", 00:26:37.406 "params": { 00:26:37.406 "name": "nvme0", 00:26:37.406 "trtype": "TCP", 00:26:37.406 "adrfam": "IPv4", 00:26:37.406 "traddr": "127.0.0.1", 00:26:37.406 "trsvcid": "4420", 00:26:37.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:37.406 "prchk_reftag": false, 00:26:37.406 "prchk_guard": false, 
00:26:37.406 "ctrlr_loss_timeout_sec": 0, 00:26:37.406 "reconnect_delay_sec": 0, 00:26:37.406 "fast_io_fail_timeout_sec": 0, 00:26:37.406 "psk": "key0", 00:26:37.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:37.406 "hdgst": false, 00:26:37.406 "ddgst": false, 00:26:37.406 "multipath": "multipath" 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_nvme_set_hotplug", 00:26:37.406 "params": { 00:26:37.406 "period_us": 100000, 00:26:37.406 "enable": false 00:26:37.406 } 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "method": "bdev_wait_for_examine" 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }, 00:26:37.406 { 00:26:37.406 "subsystem": "nbd", 00:26:37.406 "config": [] 00:26:37.406 } 00:26:37.406 ] 00:26:37.406 }' 00:26:37.406 11:10:04 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:37.406 [2024-12-05 11:10:04.437383] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 00:26:37.407 [2024-12-05 11:10:04.437460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85780 ] 00:26:37.665 [2024-12-05 11:10:04.591911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.665 [2024-12-05 11:10:04.644051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.665 [2024-12-05 11:10:04.766616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:37.665 [2024-12-05 11:10:04.817180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:38.232 11:10:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.232 11:10:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:26:38.232 11:10:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:26:38.232 11:10:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:38.232 11:10:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:26:38.490 11:10:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:38.490 11:10:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:26:38.490 11:10:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:38.490 11:10:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:38.490 11:10:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:38.490 11:10:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:38.490 11:10:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:38.749 11:10:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:26:38.749 11:10:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:26:38.749 11:10:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:38.749 11:10:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:38.749 11:10:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:38.749 11:10:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:38.749 11:10:05 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:39.010 11:10:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:26:39.010 11:10:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:26:39.010 11:10:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:26:39.010 11:10:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:39.272 11:10:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:26:39.272 11:10:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:39.272 11:10:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.YTM7XwnCGP /tmp/tmp.LRYOqE6Nlm 00:26:39.272 11:10:06 keyring_file -- keyring/file.sh@20 -- # killprocess 85780 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85780 ']' 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85780 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85780 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:39.272 killing process with pid 85780 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85780' 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@973 -- # kill 85780 00:26:39.272 Received shutdown signal, test time was about 1.000000 seconds 00:26:39.272 00:26:39.272 Latency(us) 00:26:39.272 [2024-12-05T11:10:06.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.272 [2024-12-05T11:10:06.431Z] =================================================================================================================== 00:26:39.272 [2024-12-05T11:10:06.431Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:39.272 11:10:06 keyring_file -- common/autotest_common.sh@978 -- # wait 85780 00:26:39.532 11:10:06 keyring_file -- keyring/file.sh@21 -- # killprocess 85524 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85524 ']' 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85524 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85524 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:39.532 killing process with pid 85524 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85524' 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@973 -- # kill 85524 00:26:39.532 11:10:06 keyring_file -- common/autotest_common.sh@978 -- # wait 85524 00:26:39.791 00:26:39.791 real 0m14.341s 00:26:39.791 user 0m34.174s 00:26:39.791 sys 0m3.395s 00:26:39.791 11:10:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.791 11:10:06 keyring_file 
-- common/autotest_common.sh@10 -- # set +x 00:26:39.791 ************************************ 00:26:39.791 END TEST keyring_file 00:26:39.791 ************************************ 00:26:39.791 11:10:06 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:26:39.791 11:10:06 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:39.791 11:10:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.791 11:10:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.791 11:10:06 -- common/autotest_common.sh@10 -- # set +x 00:26:39.791 ************************************ 00:26:39.791 START TEST keyring_linux 00:26:39.791 ************************************ 00:26:39.791 11:10:06 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:26:39.791 Joined session keyring: 427844082 00:26:40.051 * Looking for test storage... 00:26:40.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:40.051 11:10:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:40.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.051 --rc genhtml_branch_coverage=1 00:26:40.051 --rc genhtml_function_coverage=1 00:26:40.051 --rc genhtml_legend=1 00:26:40.051 --rc geninfo_all_blocks=1 00:26:40.051 --rc geninfo_unexecuted_blocks=1 00:26:40.051 00:26:40.051 ' 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:40.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.051 --rc genhtml_branch_coverage=1 00:26:40.051 --rc genhtml_function_coverage=1 00:26:40.051 --rc genhtml_legend=1 00:26:40.051 --rc geninfo_all_blocks=1 00:26:40.051 --rc geninfo_unexecuted_blocks=1 00:26:40.051 00:26:40.051 ' 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:40.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.051 --rc genhtml_branch_coverage=1 00:26:40.051 --rc genhtml_function_coverage=1 00:26:40.051 --rc genhtml_legend=1 00:26:40.051 --rc geninfo_all_blocks=1 00:26:40.051 --rc geninfo_unexecuted_blocks=1 00:26:40.051 00:26:40.051 ' 00:26:40.051 11:10:07 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:40.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:40.051 --rc genhtml_branch_coverage=1 00:26:40.051 --rc genhtml_function_coverage=1 00:26:40.051 --rc genhtml_legend=1 00:26:40.051 --rc geninfo_all_blocks=1 00:26:40.051 --rc geninfo_unexecuted_blocks=1 00:26:40.051 00:26:40.051 ' 00:26:40.051 11:10:07 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:26:40.051 11:10:07 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.051 11:10:07 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:40.051 11:10:07 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=e0ed187f-dfd9-4207-ba9a-ca4fd5a95a11 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.052 11:10:07 keyring_linux -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.052 11:10:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:26:40.052 11:10:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.052 11:10:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.052 11:10:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.052 11:10:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.311 11:10:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.311 11:10:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.311 11:10:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:26:40.312 11:10:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:40.312 11:10:07 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:40.312 11:10:07 keyring_linux -- 
nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:40.312 11:10:07 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:40.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@507 -- # python - 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:26:40.312 /tmp/:spdk-test:key0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@17 -- # 
digest=0 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:26:40.312 11:10:07 keyring_linux -- nvmf/common.sh@507 -- # python - 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:26:40.312 /tmp/:spdk-test:key1 00:26:40.312 11:10:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85902 00:26:40.312 11:10:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85902 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85902 ']' 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:40.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:40.312 11:10:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:40.312 [2024-12-05 11:10:07.383303] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
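prep_key above converts each raw hex key into the NVMe TLS PSK interchange form before writing it out: the ASCII key bytes get a CRC32 appended, and the result is base64-encoded and framed as NVMeTLSkey-1:<digest>:<base64>:, with digest 00 meaning no PSK hash. A sketch of the same transform; the little-endian CRC packing is an assumption inferred from decoding the sample key, not taken from the script itself:

python3 - <<'EOF'
import base64, struct, zlib
key = b'00112233445566778899aabbccddeeff'  # key0 from this run
crc = struct.pack('<I', zlib.crc32(key))    # assumed 4-byte little-endian CRC32
print('NVMeTLSkey-1:00:%s:' % base64.b64encode(key + crc).decode())
EOF

If the assumption holds, this prints the NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: string that the keyctl step below registers.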
00:26:40.312 [2024-12-05 11:10:07.383374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85902 ] 00:26:40.572 [2024-12-05 11:10:07.534248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.572 [2024-12-05 11:10:07.584797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.572 [2024-12-05 11:10:07.642573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:41.593 [2024-12-05 11:10:08.335579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.593 null0 00:26:41.593 [2024-12-05 11:10:08.367508] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:41.593 [2024-12-05 11:10:08.367680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:26:41.593 687593544 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:26:41.593 729036038 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85920 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:26:41.593 11:10:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85920 /var/tmp/bperf.sock 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85920 ']' 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.593 11:10:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:26:41.593 [2024-12-05 11:10:08.449045] Starting SPDK v25.01-pre git sha1 3a4e432ea / DPDK 24.03.0 initialization... 
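The two keyctl calls above are what distinguish this suite from keyring_file: the PSKs live in the kernel session keyring (@s) rather than in files, and keyctl prints the serial number of each new key (687593544 and 729036038 in this run). Once the keyring_linux_set_options --enable call below has run, SPDK can resolve a name like :spdk-test:key0 against that keyring. A minimal sketch of the registration step:

# add a user-type key to the session keyring; stdout is its serial number
sn=$(keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s)
echo "$sn"   # 687593544 in the trace above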
00:26:41.593 [2024-12-05 11:10:08.449135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85920 ] 00:26:41.593 [2024-12-05 11:10:08.604147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.593 [2024-12-05 11:10:08.658458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.530 11:10:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.530 11:10:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:26:42.530 11:10:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:26:42.530 11:10:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:26:42.530 11:10:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:26:42.530 11:10:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:42.789 [2024-12-05 11:10:09.810629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:42.789 11:10:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:42.789 11:10:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:26:43.049 [2024-12-05 11:10:10.072077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:43.049 nvme0n1 00:26:43.049 11:10:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:26:43.049 11:10:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:26:43.049 11:10:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:43.049 11:10:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:43.049 11:10:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.049 11:10:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:43.308 11:10:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:26:43.308 11:10:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:43.308 11:10:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:26:43.308 11:10:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:26:43.308 11:10:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:43.309 11:10:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:43.309 11:10:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@25 -- # sn=687593544 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
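check_keys, which starts in the trace above, closes the loop: it asks SPDK over RPC which keys it holds and verifies that the serial number (.sn) reported for :spdk-test:key0 is the same kernel key that keyctl finds in the session keyring. A sketch of that assertion with the paths from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_sn=$("$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
kernel_sn=$(keyctl search @s user :spdk-test:key0)
[[ "$spdk_sn" == "$kernel_sn" ]] && keyctl print "$kernel_sn"   # should echo the PSK string itself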
00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 687593544 == \6\8\7\5\9\3\5\4\4 ]] 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 687593544 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:26:43.568 11:10:10 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.827 Running I/O for 1 seconds... 00:26:44.764 19040.00 IOPS, 74.38 MiB/s 00:26:44.764 Latency(us) 00:26:44.764 [2024-12-05T11:10:11.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.764 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:44.764 nvme0n1 : 1.01 19039.56 74.37 0.00 0.00 6698.62 2276.65 8527.58 00:26:44.764 [2024-12-05T11:10:11.923Z] =================================================================================================================== 00:26:44.764 [2024-12-05T11:10:11.923Z] Total : 19039.56 74.37 0.00 0.00 6698.62 2276.65 8527.58 00:26:44.764 { 00:26:44.764 "results": [ 00:26:44.764 { 00:26:44.764 "job": "nvme0n1", 00:26:44.764 "core_mask": "0x2", 00:26:44.764 "workload": "randread", 00:26:44.764 "status": "finished", 00:26:44.764 "queue_depth": 128, 00:26:44.764 "io_size": 4096, 00:26:44.764 "runtime": 1.006746, 00:26:44.764 "iops": 19039.559134081486, 00:26:44.764 "mibps": 74.3732778675058, 00:26:44.764 "io_failed": 0, 00:26:44.764 "io_timeout": 0, 00:26:44.764 "avg_latency_us": 6698.616719968355, 00:26:44.764 "min_latency_us": 2276.6522088353413, 00:26:44.764 "max_latency_us": 8527.575903614457 00:26:44.764 } 00:26:44.764 ], 00:26:44.764 "core_count": 1 00:26:44.764 } 00:26:44.764 11:10:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:44.765 11:10:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:45.024 11:10:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:26:45.024 11:10:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:26:45.024 11:10:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:26:45.024 11:10:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:26:45.024 11:10:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:45.024 11:10:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:26:45.284 11:10:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:26:45.284 11:10:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:26:45.284 11:10:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:26:45.284 11:10:12 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:45.284 
11:10:12 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:45.284 11:10:12 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:45.284 11:10:12 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:26:45.284 [2024-12-05 11:10:12.422716] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:45.284 [2024-12-05 11:10:12.423106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ee5d0 (107): Transport endpoint is not connected 00:26:45.284 [2024-12-05 11:10:12.424095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ee5d0 (9): Bad file descriptor 00:26:45.284 [2024-12-05 11:10:12.425093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:26:45.284 [2024-12-05 11:10:12.425114] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:45.284 [2024-12-05 11:10:12.425123] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:26:45.284 [2024-12-05 11:10:12.425134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
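The failure above is deliberate: the controller is attached with :spdk-test:key1, the TLS handshake never completes (the socket recv fails with errno 107, Transport endpoint is not connected), and the surrounding NOT wrapper only lets the test pass because the attach failed. The JSON-RPC error dump for that attempt follows below. A reduced sketch of the expected-failure idiom, assuming plain bash and omitting the wrapper's signal handling (es > 128):

NOT() {
    # invert the exit status: succeed only when the wrapped command fails
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}
NOT false && echo 'expected failure observed'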
00:26:45.284 request: 00:26:45.284 { 00:26:45.284 "name": "nvme0", 00:26:45.284 "trtype": "tcp", 00:26:45.284 "traddr": "127.0.0.1", 00:26:45.284 "adrfam": "ipv4", 00:26:45.284 "trsvcid": "4420", 00:26:45.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:45.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:45.284 "prchk_reftag": false, 00:26:45.284 "prchk_guard": false, 00:26:45.284 "hdgst": false, 00:26:45.284 "ddgst": false, 00:26:45.284 "psk": ":spdk-test:key1", 00:26:45.284 "allow_unrecognized_csi": false, 00:26:45.284 "method": "bdev_nvme_attach_controller", 00:26:45.284 "req_id": 1 00:26:45.284 } 00:26:45.284 Got JSON-RPC error response 00:26:45.284 response: 00:26:45.284 { 00:26:45.284 "code": -5, 00:26:45.284 "message": "Input/output error" 00:26:45.284 } 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@33 -- # sn=687593544 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 687593544 00:26:45.546 1 links removed 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@33 -- # sn=729036038 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 729036038 00:26:45.546 1 links removed 00:26:45.546 11:10:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85920 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85920 ']' 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85920 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85920 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:45.546 killing process with pid 85920 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85920' 00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 85920 00:26:45.546 Received shutdown signal, test time was about 1.000000 seconds 00:26:45.546 00:26:45.546 Latency(us) 
00:26:45.546 [2024-12-05T11:10:12.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.546 [2024-12-05T11:10:12.705Z] ===================================================================================================================
00:26:45.546 [2024-12-05T11:10:12.705Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:45.546 11:10:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 85920
00:26:45.547 11:10:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85902
00:26:45.547 11:10:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85902 ']'
00:26:45.547 11:10:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85902
00:26:45.547 11:10:12 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:26:45.547 11:10:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:45.547 11:10:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85902
00:26:45.806 11:10:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:45.806 11:10:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:45.806 killing process with pid 85902 11:10:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85902'
00:26:45.806 11:10:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 85902
00:26:45.806 11:10:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 85902
00:26:46.065
00:26:46.065 real 0m6.131s
00:26:46.065 user 0m11.286s
00:26:46.065 sys 0m1.782s
00:26:46.065 11:10:13 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:46.065 ************************************
00:26:46.065 END TEST keyring_linux 11:10:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:26:46.065 ************************************
00:26:46.065 11:10:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:26:46.065 11:10:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:26:46.065 11:10:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:26:46.065 11:10:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:26:46.065 11:10:13 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:26:46.065 11:10:13 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:26:46.065 11:10:13 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:26:46.065 11:10:13 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:46.065 11:10:13 -- common/autotest_common.sh@10 -- # set +x
00:26:46.065 11:10:13 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:26:46.065 11:10:13 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:26:46.066 11:10:13 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:26:46.066 11:10:13 -- common/autotest_common.sh@10 -- # set +x
00:26:48.597 INFO: APP EXITING
00:26:48.597 INFO: killing all VMs
00:26:48.597 INFO: killing vhost app
00:26:48.597 INFO: EXIT DONE
00:26:49.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:49.163 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:26:49.422 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:26:50.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:26:50.358 Cleaning
00:26:50.358 Removing: /var/run/dpdk/spdk0/config
00:26:50.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:26:50.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:26:50.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:26:50.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:26:50.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:26:50.358 Removing: /var/run/dpdk/spdk0/hugepage_info
00:26:50.358 Removing: /var/run/dpdk/spdk1/config
00:26:50.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:26:50.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:26:50.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:26:50.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:26:50.358 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:26:50.358 Removing: /var/run/dpdk/spdk1/hugepage_info
00:26:50.358 Removing: /var/run/dpdk/spdk2/config
00:26:50.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:26:50.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:26:50.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:26:50.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:26:50.358 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:26:50.358 Removing: /var/run/dpdk/spdk2/hugepage_info
00:26:50.358 Removing: /var/run/dpdk/spdk3/config
00:26:50.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:26:50.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:26:50.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:26:50.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:26:50.358 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:26:50.358 Removing: /var/run/dpdk/spdk3/hugepage_info
00:26:50.358 Removing: /var/run/dpdk/spdk4/config
00:26:50.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:26:50.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:26:50.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:26:50.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:26:50.358 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:26:50.358 Removing: /var/run/dpdk/spdk4/hugepage_info
00:26:50.358 Removing: /dev/shm/nvmf_trace.0
00:26:50.358 Removing: /dev/shm/spdk_tgt_trace.pid56775
00:26:50.358 Removing: /var/run/dpdk/spdk0
00:26:50.358 Removing: /var/run/dpdk/spdk1
00:26:50.358 Removing: /var/run/dpdk/spdk2
00:26:50.358 Removing: /var/run/dpdk/spdk3
00:26:50.358 Removing: /var/run/dpdk/spdk4
00:26:50.358 Removing: /var/run/dpdk/spdk_pid56616
00:26:50.358 Removing: /var/run/dpdk/spdk_pid56775
00:26:50.358 Removing: /var/run/dpdk/spdk_pid56975
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57062
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57084
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57199
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57211
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57351
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57541
00:26:50.358 Removing: /var/run/dpdk/spdk_pid57689
00:26:50.618 Removing: /var/run/dpdk/spdk_pid57762
00:26:50.618 Removing: /var/run/dpdk/spdk_pid57846
00:26:50.618 Removing: /var/run/dpdk/spdk_pid57945
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58029
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58063
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58093
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58168
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58273
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58703
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58755
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58800
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58816
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58883
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58894
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58961
00:26:50.618 Removing: /var/run/dpdk/spdk_pid58977
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59028
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59040
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59086
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59103
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59229
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59259
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59347
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59681
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59699
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59735
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59743
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59764
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59783
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59791
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59814
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59833
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59843
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59862
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59881
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59895
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59910
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59929
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59943
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59958
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59977
00:26:50.618 Removing: /var/run/dpdk/spdk_pid59991
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60006
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60042
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60056
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60085
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60157
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60186
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60195
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60225
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60234
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60244
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60286
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60300
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60328
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60338
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60347
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60357
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60365
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60376
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60380
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60395
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60418
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60450
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60455
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60488
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60496
00:26:50.618 Removing: /var/run/dpdk/spdk_pid60505
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60540
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60557
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60584
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60593
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60595
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60608
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60610
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60623
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60625
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60638
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60720
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60762
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60869
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60903
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60942
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60962
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60979
00:26:50.878 Removing: /var/run/dpdk/spdk_pid60993
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61030
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61046
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61125
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61142
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61187
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61246
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61296
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61330
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61433
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61470
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61508
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61740
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61832
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61866
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61890
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61929
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61957
00:26:50.878 Removing: /var/run/dpdk/spdk_pid61998
00:26:50.878 Removing: /var/run/dpdk/spdk_pid62028
00:26:50.878 Removing: /var/run/dpdk/spdk_pid62439
00:26:50.878 Removing: /var/run/dpdk/spdk_pid62477
00:26:50.878 Removing: /var/run/dpdk/spdk_pid62821
00:26:50.878 Removing: /var/run/dpdk/spdk_pid63286
00:26:50.878 Removing: /var/run/dpdk/spdk_pid63545
00:26:50.878 Removing: /var/run/dpdk/spdk_pid64424
00:26:50.878 Removing: /var/run/dpdk/spdk_pid65353
00:26:51.137 Removing: /var/run/dpdk/spdk_pid65476
00:26:51.137 Removing: /var/run/dpdk/spdk_pid65538
00:26:51.137 Removing: /var/run/dpdk/spdk_pid66959
00:26:51.137 Removing: /var/run/dpdk/spdk_pid67282
00:26:51.137 Removing: /var/run/dpdk/spdk_pid70827
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71185
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71294
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71428
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71457
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71491
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71514
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71619
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71756
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71904
00:26:51.137 Removing: /var/run/dpdk/spdk_pid71990
00:26:51.137 Removing: /var/run/dpdk/spdk_pid72182
00:26:51.137 Removing: /var/run/dpdk/spdk_pid72265
00:26:51.137 Removing: /var/run/dpdk/spdk_pid72352
00:26:51.137 Removing: /var/run/dpdk/spdk_pid72713
00:26:51.137 Removing: /var/run/dpdk/spdk_pid73131
00:26:51.137 Removing: /var/run/dpdk/spdk_pid73132
00:26:51.137 Removing: /var/run/dpdk/spdk_pid73133
00:26:51.137 Removing: /var/run/dpdk/spdk_pid73403
00:26:51.137 Removing: /var/run/dpdk/spdk_pid73677
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74068
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74070
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74407
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74421
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74435
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74468
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74478
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74830
00:26:51.137 Removing: /var/run/dpdk/spdk_pid74879
00:26:51.137 Removing: /var/run/dpdk/spdk_pid75206
00:26:51.137 Removing: /var/run/dpdk/spdk_pid75402
00:26:51.137 Removing: /var/run/dpdk/spdk_pid75838
00:26:51.137 Removing: /var/run/dpdk/spdk_pid76397
00:26:51.137 Removing: /var/run/dpdk/spdk_pid77242
00:26:51.137 Removing: /var/run/dpdk/spdk_pid77895
00:26:51.137 Removing: /var/run/dpdk/spdk_pid77897
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80193
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80248
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80308
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80364
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80482
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80540
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80601
00:26:51.137 Removing: /var/run/dpdk/spdk_pid80661
00:26:51.137 Removing: /var/run/dpdk/spdk_pid81029
00:26:51.137 Removing: /var/run/dpdk/spdk_pid82244
00:26:51.137 Removing: /var/run/dpdk/spdk_pid82384
00:26:51.137 Removing: /var/run/dpdk/spdk_pid82632
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83237
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83402
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83558
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83655
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83821
00:26:51.137 Removing: /var/run/dpdk/spdk_pid83930
00:26:51.397 Removing: /var/run/dpdk/spdk_pid84653
00:26:51.397 Removing: /var/run/dpdk/spdk_pid84690
00:26:51.397 Removing: /var/run/dpdk/spdk_pid84726
00:26:51.397 Removing: /var/run/dpdk/spdk_pid84992
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85026
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85058
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85524
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85541
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85780
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85902
00:26:51.397 Removing: /var/run/dpdk/spdk_pid85920
00:26:51.397 Clean
00:26:51.397 11:10:18 -- common/autotest_common.sh@1453 -- # return 0
00:26:51.397 11:10:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:26:51.397 11:10:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:51.397 11:10:18 -- common/autotest_common.sh@10 -- # set +x
00:26:51.397 11:10:18 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:26:51.397 11:10:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:51.397 11:10:18 -- common/autotest_common.sh@10 -- # set +x
00:26:51.397 11:10:18 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:51.397 11:10:18 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:26:51.656 11:10:18 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:26:51.656 11:10:18 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:26:51.656 11:10:18 -- spdk/autotest.sh@398 -- # hostname
00:26:51.656 11:10:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:51.656 geninfo: WARNING: invalid characters removed from testname!
00:27:18.293 11:10:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:19.694 11:10:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:22.229 11:10:48 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:24.133 11:10:51 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:26.671 11:10:53 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:28.577 11:10:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:30.481 11:10:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:27:30.481 11:10:57 -- spdk/autorun.sh@1 -- $ timing_finish
00:27:30.481 11:10:57 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:27:30.481 11:10:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:30.481 11:10:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:27:30.481 11:10:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:30.740 + [[ -n 5207 ]]
00:27:30.740 + sudo kill 5207
00:27:30.750 [Pipeline] }
00:27:30.767 [Pipeline] // timeout
00:27:30.773 [Pipeline] }
00:27:30.790 [Pipeline] // stage
00:27:30.798 [Pipeline] }
00:27:30.812 [Pipeline] // catchError
00:27:30.823 [Pipeline] stage
00:27:30.825 [Pipeline] { (Stop VM)
00:27:30.838 [Pipeline] sh
00:27:31.123 + vagrant halt
00:27:34.472 ==> default: Halting domain...
00:27:41.059 [Pipeline] sh
00:27:41.337 + vagrant destroy -f
00:27:44.687 ==> default: Removing domain...
00:27:44.699 [Pipeline] sh
00:27:44.978 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:27:44.987 [Pipeline] }
00:27:45.006 [Pipeline] // stage
00:27:45.011 [Pipeline] }
00:27:45.029 [Pipeline] // dir
00:27:45.035 [Pipeline] }
00:27:45.052 [Pipeline] // wrap
00:27:45.058 [Pipeline] }
00:27:45.072 [Pipeline] // catchError
00:27:45.084 [Pipeline] stage
00:27:45.087 [Pipeline] { (Epilogue)
00:27:45.102 [Pipeline] sh
00:27:45.381 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:51.955 [Pipeline] catchError
00:27:51.957 [Pipeline] {
00:27:51.971 [Pipeline] sh
00:27:52.255 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:52.513 Artifacts sizes are good
00:27:52.522 [Pipeline] }
00:27:52.539 [Pipeline] // catchError
00:27:52.550 [Pipeline] archiveArtifacts
00:27:52.557 Archiving artifacts
00:27:52.732 [Pipeline] cleanWs
00:27:52.743 [WS-CLEANUP] Deleting project workspace...
00:27:52.743 [WS-CLEANUP] Deferred wipeout is used...
00:27:52.750 [WS-CLEANUP] done
00:27:52.752 [Pipeline] }
00:27:52.768 [Pipeline] // stage
00:27:52.774 [Pipeline] }
00:27:52.789 [Pipeline] // node
00:27:52.795 [Pipeline] End of Pipeline
00:27:52.832 Finished: SUCCESS